Commit Graph

1871 Commits

Author SHA1 Message Date
Martin Schwidefsky
fc5243d98a [S390] arch_setup_additional_pages arguments
arch_setup_additional_pages currently gets two arguments: the binary
format description and an indication of whether the process uses an
executable stack or not. The second argument is not used by anybody,
so it could be removed without replacement.

What does make sense is to pass an indication of whether the process
uses the ELF interpreter or not. The glibc code will not use anything
from the vdso if the process does not use the dynamic linker, so for
statically linked binaries the architecture backend can choose not
to map the vdso.

Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-25 13:38:54 +01:00
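A minimal C sketch of the reworked hook described above, assuming the new second argument is a simple uses_interp flag; map_vdso() is a hypothetical helper standing in for whatever the architecture backend does to actually map its vdso.

/* Sketch only: the second argument now indicates whether the ELF
 * interpreter (dynamic linker) is in use, replacing the unused
 * executable-stack argument. */
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
	if (!uses_interp)
		return 0;		/* static binary: glibc won't touch the vdso */

	return map_vdso(bprm);		/* hypothetical helper that maps the vdso pages */
}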
Dale Farnsworth
f8f50b1bdd powerpc/32: Wire up the trampoline code for kdump
Wire up the trampoline code for ppc32 to relay exceptions from the
vectors at address 0 to vectors at address 32MB, and modify Kconfig
to enable Kdump support for all classic powerpcs.

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-23 15:13:29 +11:00
Dale Farnsworth
ccdcef72c2 powerpc/32: Add the ability for a classic ppc kernel to be loaded at 32M
Add the ability for a classic ppc kernel to be loaded at an address
of 32MB.  This is done by fixing a few places that assume we are loaded
at address 0, and by changing several uses of KERNELBASE to use
PAGE_OFFSET, instead.

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-23 15:13:29 +11:00
Dale Farnsworth
6f29c3298b powerpc/32: Setup OF properties for kdump
Refactor the setting of kdump OF properties, moving the common code
from machine_kexec_64.c to machine_kexec.c where it can be used on
both ppc64 and ppc32.  This will be needed for kdump to work on ppc32
platforms.

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-23 15:13:29 +11:00
Anton Vorontsov
7375331388 powerpc/32/kdump: Implement crash_setup_regs() using ppc_save_regs()
This replaces the dummy crash_setup_regs function with a full-fledged
crash_setup_regs implementation.  On PPC32 we simply use the new
ppc_save_regs function to dump the registers.

Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-23 15:13:28 +11:00
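A hedged sketch of what such a 32-bit crash_setup_regs() typically looks like, built on the ppc_save_regs() helper introduced by the following commit; not necessarily the exact code.

/* Copy the register snapshot we were given (e.g. from a trap), or
 * capture the current registers with ppc_save_regs(). */
static inline void crash_setup_regs(struct pt_regs *newregs,
				    struct pt_regs *oldregs)
{
	if (oldregs)
		memcpy(newregs, oldregs, sizeof(*newregs));
	else
		ppc_save_regs(newregs);
}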
Anton Vorontsov
322b439455 powerpc: Prepare xmon_save_regs for use with kdump
Today the arch/powerpc/xmon/setjmp.S file contains only the
xmon_save_regs function.  We want to use it for kdump purposes, so
let's move the file into arch/powerpc/kernel/ and give the function a
more generic name (ppc_save_regs).

Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-23 15:13:28 +11:00
Anton Vorontsov
77733f8a33 powerpc: Make default kexec/crash_kernel ops implicit
This removes the need for each platform to specify default kexec and
crash kernel ops, thus effectively adding working kexec support for
most 6xx/7xx/7xxx-based boards.

Platforms that can't cope with the default ops will explode in some
weird way (a hang or reboot is most likely), which means that the
board's kexec support should be fixed or blacklisted via a dummy
_prepare callback that returns -ENOSYS.

Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-23 15:13:28 +11:00
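An illustrative sketch of the "implicit default" pattern this commit describes: if a platform does not override the hook, fall back to the generic implementation. The exact code may differ; names follow the usual ppc_md convention.

/* Use the platform hook when present, otherwise the common default,
 * so boards no longer have to register default kexec ops themselves. */
void machine_kexec(struct kimage *image)
{
	if (ppc_md.machine_kexec)
		ppc_md.machine_kexec(image);
	else
		default_machine_kexec(image);

	/* Should not return; spin if the kexec somehow failed. */
	for (;;)
		;
}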
Dale Farnsworth
2e8e4f5b80 powerpc: Setup OF properties for ppc32 kexec
Refactor the setting of kexec OF properties, moving the common code
from machine_kexec_64.c to machine_kexec.c where it can be used on
both ppc64 and ppc32.  This is needed for kexec to work on ppc32
platforms.

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-23 15:13:28 +11:00
Benjamin Herrenschmidt
9dce3ce5c5 powerpc/44x: 44x TLB doesn't need "Guarded" set for all pages
After discussing it with the chip designers, it appears that it's not
necessary to set G everywhere on 440 cores. The various core
errata related to prefetch should be sorted out by firmware by
disabling icache prefetching in CCR0. We add the workaround to
the kernel anyway, just in case oooold firmwares don't do it.

This is valid for -all- 4xx core variants. Later ones hard-wire
the absence of prefetch, but it does no harm to clear the bits
in CCR0 (they should already be cleared anyway).

We still leave G=1 on the linear mapping for now, we need to
stop over-mapping RAM to be able to remove it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:16 +11:00
Benjamin Herrenschmidt
64b3d0e812 powerpc/mm: Rework usage of _PAGE_COHERENT/NO_CACHE/GUARDED
Currently, we never set _PAGE_COHERENT in the PTEs, we just OR it in
in the hash code based on some CPU feature bit.  We also manipulate
_PAGE_NO_CACHE and _PAGE_GUARDED by hand in all sorts of places.

This changes the logic so that instead, the PTE now contains
_PAGE_COHERENT for all normal RAM pages that have I = 0 on platforms
that need it.  The hash code clears it if the feature bit is not set.

It also adds some clean accessors to set up various valid combinations
of access flags and changes various bits of code to use them instead.

This should help ensure that the PTE actually contains the bit
combinations that we really want.

I also removed _PAGE_GUARDED from _PAGE_BASE on 44x and instead
set it explicitly from the TLB miss handler.  I will ultimately remove
it completely as it appears that it might not be needed after all,
but in the meantime, having it in the TLB miss handler makes things
a lot easier.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:16 +11:00
Benjamin Herrenschmidt
7752035180 powerpc/mm: Runtime allocation of mmu context maps for nohash CPUs
This makes the MMU context code used for CPUs with no hash table
(except 603) dynamically allocate the various maps used to track
the state of contexts.

Only the main free map and CPU 0 stale map are allocated at boot
time.  Other CPU maps are allocated when those CPUs are brought up
and freed if they are unplugged.

This also moves the initialization of the MMU context management
slightly later during the boot process, which should be fine as
it's really only needed when userland is first started anyway.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:16 +11:00
Benjamin Herrenschmidt
2a4aca1144 powerpc/mm: Split low level tlb invalidate for nohash processors
Currently, the various forms of low level TLB invalidations are all
implemented in misc_32.S for 32-bit processors, in a fairly scary
mess of #ifdef's and with interesting duplication such as a whole
bunch of code for FSL _tlbie and _tlbia which are no longer used.

This moves things around such that _tlbie is now defined in
hash_low_32.S and is only used by the 32-bit hash code, and all
nohash CPUs use the various _tlbil_* forms that are now moved to
a new file, tlb_nohash_low.S.

I moved all the definitions for that stuff out of
include/asm/tlbflush.h and into mm/mmu_decl.h, as they are really
internal mm stuff.

The code should have no functional changes.  I kept some variants
inline for trivial forms on things like 40x and 8xx.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:16 +11:00
Benjamin Herrenschmidt
f048aace29 powerpc/mm: Add SMP support to no-hash TLB handling
This commit moves the whole no-hash TLB handling out of line into a
new tlb_nohash.c file, and implements some basic SMP support using
IPIs and/or broadcast tlbivax instructions.

Note that I'm using local invalidations for D->I cache coherency.

At worst, if another processor is trying to execute the same code and
has the old entry in its TLB, it will just take a fault and re-do
the TLB flush locally (it won't re-do the cache flush in any case).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:16 +11:00
Benjamin Herrenschmidt
7c03d653cd powerpc/mm: Introduce MMU features
We're about to run out of CPU feature bits and I need to add some new
ones for various MMU related bits, so this patch separates the MMU
features from the CPU features.  I moved over the 32-bit MMU related
ones, added base features for MMU type families, but didn't move
over any 64-bit only feature yet.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:16 +11:00
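A short sketch of what the split looks like in practice; the field and helper names mirror the existing cpu_has_feature() convention and should be treated as illustrative.

/* MMU feature bits get their own word in the CPU spec, queried with a
 * helper that parallels cpu_has_feature(). */
static inline int mmu_has_feature(unsigned long feature)
{
	return (cur_cpu_spec->mmu_features & feature) != 0;
}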
Benjamin Herrenschmidt
5e696617c4 powerpc/mm: Split mmu_context handling
This splits the mmu_context handling between 32-bit hash based
processors, 64-bit hash based processors and everybody else.  This is
preliminary work for adding SMP support for BookE processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:15 +11:00
Benjamin Herrenschmidt
6d2170be45 powerpc/4xx: Extended DCR support v2
This adds supports to the "extended" DCR addressing via the indirect
mfdcrx/mtdcrx instructions supported by some 4xx cores (440H6 and
later).

I enabled the feature for now only on AMCC 460 chips.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:15 +11:00
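A sketch of why the indirect form matters: the DCR number can be a run-time value held in a register, whereas the plain mfdcr instruction encodes it as an immediate. The accessor looks roughly like this:

/* Indirect DCR read: the DCR number comes from a GPR at run time. */
static inline unsigned int mfdcrx(unsigned int reg)
{
	unsigned int ret;

	asm volatile("mfdcrx %0,%1" : "=r" (ret) : "r" (reg));
	return ret;
}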
Nathan Lynch
13ba3c0092 powerpc: Convert sysfs cache code to of_find_next_cache_node()
Using the common code means that more complete cache information will
be provided in sysfs on platforms that don't use the l2-cache property
convention.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:14 +11:00
Nathan Lynch
b2ea25b958 powerpc: Convert cpu_to_l2cache() to of_find_next_cache_node()
The smp code uses cache information to populate cpu_core_map; change
it to use common code for cache lookup.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:14 +11:00
Nathan Lynch
e523f723d6 powerpc: Add of_find_next_cache_node()
We have more than one piece of code that looks up cache nodes manually
using the "l2-cache" property.  Add a common helper routine which does
this and handles ePAPR's "next-level-cache" property as well as
powermac.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:14 +11:00
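A hedged sketch of such a helper: follow either the traditional "l2-cache" phandle or ePAPR's "next-level-cache" phandle to the next cache level. The powermac-specific child-node handling mentioned above is omitted.

/* Return the device node for the next level of cache, or NULL. */
struct device_node *of_find_next_cache_node(struct device_node *np)
{
	const phandle *handle;

	handle = of_get_property(np, "l2-cache", NULL);
	if (!handle)
		handle = of_get_property(np, "next-level-cache", NULL);
	if (handle)
		return of_find_node_by_phandle(*handle);

	return NULL;	/* powermac "cache" child-node case not shown */
}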
Ingo Molnar
30cd324e97 Merge branches 'tracing/ftrace', 'tracing/ring-buffer' and 'tracing/urgent' into tracing/core
Conflicts:
	include/linux/ftrace.h
2008-12-19 09:42:40 +01:00
Ingo Molnar
b9974dc6bd Merge branch 'linus' into cpus4096 2008-12-18 11:48:30 +01:00
Paul Mackerras
c280266a32 Merge branch 'linux-2.6' into next 2008-12-18 11:06:12 +11:00
Dave Liu
28707af01b powerpc/fsl-booke: Fix the miss interrupt restore
Commit e5e774d883 ("powerpc/fsl-booke: Fix problem with _tlbil_va
being interrupted") introduced an issue that causes problems like
this:

Kernel BUG at c00b19fc [verbose debug info unavailable]
Oops: Exception in kernel mode, sig: 5 [#1]
MPC8572 DS
Modules linked in:
NIP: c00b19fc LR: c00b1c34 CTR: c0064e88
REGS: ef02b7b0 TRAP: 0700   Not tainted  (2.6.28-rc8-00057-g1bda712)
MSR: 00021000 <ME>  CR: 44048028  XER: 20000000
TASK = ef02c000[1] 'init' THREAD: ef02a000
GPR00: 00000001 ef02b860 ef02c000 eec201a0 c0dec2c0 00000000 000078a1 00000400
GPR08: c00b4e40 000078a1 c048ec00 a1780000 44048028 ecd26917 00000001 ef02b948
GPR16: ffffffea 0000020c 00000000 00000000 00000003 0000000a 00000000 000078a1
GPR24: eec201a0 00000000 ed849000 00000400 ef02b95c 00000001 ef02b978 ef02b984
NIP [c00b19fc] __find_get_block+0x24/0x238
LR [c00b1c34] __getblk+0x24/0x2a0
Call Trace:
[ef02b860] [c017b768] generic_make_request+0x290/0x328 (unreliable)
[ef02b8b0] [c00b1c34] __getblk+0x24/0x2a0
[ef02b910] [c00b4ae4] __bread+0x14/0xf8
[ef02b920] [c00fc228] ext2_get_branch+0xf0/0x138
[ef02b940] [c00fcc88] ext2_get_block+0xb8/0x828
[ef02ba00] [c00bbdc8] do_mpage_readpage+0x188/0x808
[ef02bac0] [c00bc5b4] mpage_readpages+0xec/0x144
[ef02bb50] [c00fba38] ext2_readpages+0x24/0x34
[ef02bb60] [c006ade0] __do_page_cache_readahead+0x150/0x230
[ef02bbb0] [c0064bdc] filemap_fault+0x31c/0x3e0
[ef02bbf0] [c00728b8] __do_fault+0x60/0x5b0
[ef02bc50] [c0011e0c] do_page_fault+0x2d8/0x4c4
[ef02bd10] [c000ed90] handle_page_fault+0xc/0x80
[ef02bdd0] [c00c7adc] set_brk+0x74/0x9c
[ef02bdf0] [c00c9274] load_elf_binary+0x70c/0x1180
[ef02be70] [c00945f0] search_binary_handler+0xa8/0x274
[ef02bea0] [c0095818] do_execve+0x19c/0x1d4
[ef02bed0] [c000766c] sys_execve+0x58/0x84
[ef02bef0] [c000e950] ret_from_syscall+0x0/0x3c
[ef02bfb0] [c009c6fc] sys_dup+0x24/0x6c
[ef02bfc0] [c0001e04] init_post+0xb0/0xf0
[ef02bfd0] [c046c1ac] kernel_init+0xcc/0xf4
[ef02bff0] [c000e6d0] kernel_thread+0x4c/0x68
Instruction dump:
4bffffa4 813f000c 4bffffac 9421ffb0 7c0802a6 7d800026 90010054 bf210034
91810030 7c0000a6 68008000 54008ffe <0f000000> 3d20c04e 3b29ffb8 38000008

The issue was that the beqlr returns early before we have re-enabled interrupts.

Signed-off-by: Dave Liu <daveliu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-17 10:06:13 -06:00
Ingo Molnar
1f3f424a6b Merge branch 'linus' into cpus4096 2008-12-17 13:07:48 +01:00
Kay Sievers
aab0d375e0 powerpc: struct device - replace bus_id with dev_name(), dev_set_name()
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-16 15:53:38 +11:00
Nathan Lynch
edc72ac4a0 powerpc/pseries: Check for GIQ indicator before calling set-indicator
Since "Factor out cpu joining/unjoining the GIQ"
(b4963255ad) the WARN_ON in
xics_set_cpu_giq() is being triggered during boot on JS20 because the
GIQ indicator is not available on that platform.  While the warning is
harmless and the system runs normally, it's nicer to check for the
existence of the indicator before trying to manipulate it.

Implement rtas_indicator_present(), which searches the
/rtas/rtas-indicators property for the given indicator token, and use
this function in xics_set_cpu_giq().

Also use a WARN statement in xics_set_cpu_giq to get better
information on failure.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Acked-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-16 15:53:13 +11:00
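A hedged sketch of what rtas_indicator_present() does, assuming the "rtas-indicators" property is a list of (token, maximum-index) cell pairs; locking and of_node_put() are elided for brevity.

/* Return non-zero if the given indicator token is listed in /rtas. */
int rtas_indicator_present(int token, int *maxindex)
{
	struct device_node *rtas = of_find_node_by_path("/rtas");
	const unsigned int *indicators;
	int proplen, count, i;

	indicators = of_get_property(rtas, "rtas-indicators", &proplen);
	if (!indicators)
		return 0;

	count = proplen / (2 * sizeof(unsigned int));	/* token/maxindex pairs */
	for (i = 0; i < count; i++) {
		if (indicators[2 * i] != token)
			continue;
		if (maxindex)
			*maxindex = indicators[2 * i + 1];
		return 1;
	}
	return 0;
}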
Nathan Lynch
13a9801eb6 powerpc: Move smp_hw_index to 32-bit code
smp_hw_index isn't used on 64-bit, so move it from smp.c to
setup_32.c.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-16 15:53:05 +11:00
Anton Vorontsov
6b82b3e4b5 powerpc: Remove `have_of' global variable
The `have_of' variable is a relic from the arch/ppc days; it isn't
useful nowadays.

Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-16 15:52:57 +11:00
Paul Mackerras
1e1c568d6c Merge branch 'merge' into next 2008-12-16 14:38:58 +11:00
Kumar Gala
e5e774d883 powerpc/fsl-booke: Fix problem with _tlbil_va being interrupted
An example calling sequence which we did see:

copy_user_highpage -> kmap_atomic -> flush_tlb_page -> _tlbil_va

We got interrupted after setting up the MAS registers but before the
tlbwe, and the interrupt handler that caused the interrupt also did
a kmap_atomic (IDE code), so on returning from the interrupt the
MAS registers no longer contained the proper values.

Since we don't save/restore MAS registers for normal interrupts, we
need to disable interrupts in _tlbil_va to ensure atomicity.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-13 17:02:47 -06:00
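The fix itself is in assembly (_tlbil_va lives in misc_32.S), but the requirement it enforces can be modelled in C: the MAS register setup and the tlbwe must run with interrupts off so that a handler cannot clobber the MAS registers in between. A rough sketch, with the TLB instructions left as a placeholder:

static inline void tlbil_va_atomic(unsigned long address, unsigned int pid)
{
	unsigned long flags;

	local_irq_save(flags);		/* nothing may touch MAS0..MAS6 now */
	/* ... load MAS6/MAS2, tlbsx, check the valid bit, tlbwe ... */
	local_irq_restore(flags);
}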
Rusty Russell
968ea6d80e Merge ../linux-2.6-x86
Conflicts:

	arch/x86/kernel/io_apic.c
	kernel/sched.c
	kernel/sched_stats.h
2008-12-13 21:55:51 +10:30
Rusty Russell
320ab2b0b1 cpumask: convert struct clock_event_device to cpumask pointers.
Impact: change calling convention of existing clock_event APIs

struct clock_event_device's cpumask field gets changed to take a
pointer, as does the ->broadcast function.

Another single-patch change.  For safety, we BUG_ON() in
clockevents_register_device() if it's not set.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@elte.hu>
2008-12-13 21:20:26 +10:30
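A sketch of the shape of the change; only the affected members are shown and the rest of the structure is elided.

struct cpumask;				/* defined in <linux/cpumask.h> */

struct clock_event_device {
	/* ... other members unchanged ... */
	void			(*broadcast)(const struct cpumask *mask);
	const struct cpumask	*cpumask;	/* was: cpumask_t cpumask */
};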
Rusty Russell
0de26520c7 cpumask: make irq_set_affinity() take a const struct cpumask
Impact: change existing irq_chip API

Not much point in a gentle transition here: the struct irq_chip's
set_affinity method signature needs to change.

Fortunately, this is not widely used code, but it hits a few
architectures.

Note: in irq_select_affinity() I save a temporary by mangling
irq_desc[irq].affinity directly.  Ingo, does this break anything?

(Folded in fix from KOSAKI Motohiro)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
Acked-by: Ingo Molnar <mingo@redhat.com>
Cc: ralf@linux-mips.org
Cc: grundler@parisc-linux.org
Cc: jeremy@xensource.com
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
2008-12-13 21:20:26 +10:30
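An illustrative before/after of the affected signatures (other irq_chip members elided):

struct cpumask;

struct irq_chip {
	/* ... */
	/* before: void (*set_affinity)(unsigned int irq, cpumask_t dest); */
	void	(*set_affinity)(unsigned int irq, const struct cpumask *dest);
	/* ... */
};

int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask);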
Rusty Russell
98a79d6a50 cpumask: centralize cpu_online_map and cpu_possible_map
Impact: cleanup

Each SMP arch defines these themselves.  Move them to a central
location.

Twists:
1) Some archs (m32r, parisc, s390) set possible_map to all 1, so we add a
   CONFIG_INIT_ALL_POSSIBLE for this rather than break them.

2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'.
   Those archs simply have phys_cpu_present_map replaced everywhere.

3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky
   so I just manipulate them both in sync.

4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map'
   declarations.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
Tested-by: Tony Luck <tony.luck@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Mike Travis <travis@sgi.com>
Cc: ink@jurassic.park.msu.ru
Cc: rmk@arm.linux.org.uk
Cc: starvik@axis.com
Cc: tony.luck@intel.com
Cc: takata@linux-m32r.org
Cc: ralf@linux-mips.org
Cc: grundler@parisc-linux.org
Cc: paulus@samba.org
Cc: schwidefsky@de.ibm.com
Cc: lethal@linux-sh.org
Cc: wli@holomorphy.com
Cc: davem@davemloft.net
Cc: jdike@addtoit.com
Cc: mingo@redhat.com
2008-12-13 21:19:41 +10:30
Ingo Molnar
45ab6b0c76 Merge branch 'sched/core' into cpus4096
Conflicts:
	include/linux/ftrace.h
	kernel/sched.c
2008-12-12 13:48:57 +01:00
Grant Likely
640d17d60e powerpc/virtex5: Fix Virtex5 machine check handling
The 440x5 core in the Virtex5 uses the 440A type machine check
(i.e., it has MCSRR0/MCSRR1).  It thus needs to call the
appropriate fixup function to hook the right variant of the
exception.

Without this, all machine checks become fatal due to loss
of context when entering the exception handler.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2008-12-05 14:34:26 -05:00
Ingo Molnar
970987beb9 Merge branches 'tracing/ftrace', 'tracing/function-graph-tracer' and 'tracing/urgent' into tracing/core 2008-12-05 14:45:22 +01:00
Ingo Molnar
b8307db247 Merge commit 'v2.6.28-rc7' into tracing/core 2008-12-04 09:07:19 +01:00
Kumar Gala
d5b26db2cf powerpc/85xx: Add support for SMP initialization
Added an 85xx-specific smp_ops structure.  We use ePAPR-style boot
release and the MPIC for IPIs at this point.

Additionally, added routines for secondary cpu entry and initialization.

Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-03 08:19:20 -06:00
Kumar Gala
06b90969a7 powerpc/85xx: minor head_fsl_booke.S cleanup
Removed unused branch labels

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-03 08:19:19 -06:00
Trent Piepho
b389889535 powerpc: Better setup of boot page TLB entry
The initial TLB mapping for the kernel boot didn't set the memory coherent
attribute, MAS2[M], in SMP mode.

This code doesn't yet support booting a secondary processor, but if it
did, then when a secondary processor boots, it would probably signal
the primary processor by setting a variable called something like
__secondary_hold_acknowledge.  However, due to the lack of the M bit,
the primary processor would not snoop the transaction (even if a
transaction were broadcast).  If the primary CPU's L1 D-cache had a
copy, it would not be flushed and the CPU would never see the ack.
The primary CPU would then spin for a long time, perhaps a full second
before giving up, waiting for an ack from the secondary CPU that it
could never see because of the stale cache.

The value of MAS2 for the boot page TLB1 entry is a compile time constant,
so there is no need to calculate it in powerpc assembly language.

Also, from the MPC8572 manual section 6.12.5.3, "Bits that represent
offsets within a page are ignored and should be cleared." Existing code
didn't clear them; this code does.

The same applies when the page of KERNELBASE is found; we don't need
to use asm to mask off the lower 12 bits.

In the code that computes the address to rfi from, don't hard code the
offset to 24 bytes, but have the assembler figure that out for us.

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-03 08:19:18 -06:00
Liu Yu
6a800f36ac powerpc: Add SPE/EFP math emulation for E500v1/v2 processors.
This patch adds handlers for the SPE/EFP exceptions.
The code is used to emulate floating-point arithmetic
when MSR[SPE] is enabled and an EFP data interrupt or EFP round
interrupt is received.

This patch has no conflict with or dependence on FP math-emu.

The code has been tested by TestFloat.

The code does not yet support emulation of SPE/EFP instructions
(it won't be called on a program interrupt),
but that could easily be added.

Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-03 08:19:16 -06:00
Sebastien Dugue
6358d6cb32 powerpc/ibmebus: Get rid of the IRQ mapping in ibmebus_free_irq()
ibmebus_free_irq() frees the IRQ but does not remove its mapping, which
results in stale entries in the map.

This fixes it by adding a call to irq_dispose_mapping() in
ibmebus_free_irq().

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-03 20:46:36 +11:00
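A sketch of the fixed ibmebus_free_irq() as described above, assuming the virq was created on the default irq host; error handling is elided.

/* Free the handler, then dispose of the virq mapping so the reverse
 * map does not accumulate stale entries. */
void ibmebus_free_irq(u32 ist, void *dev_id)
{
	unsigned int irq = irq_find_mapping(NULL, ist);

	free_irq(irq, dev_id);
	irq_dispose_mapping(irq);	/* the call this commit adds */
}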
Julia Lawall
786b32f892 powerpc: Eliminate NULL test and memset after alloc_bootmem
As noted by Akinobu Mita in commit b1fceac2 ("x86: remove unnecessary
memset and NULL check after alloc_bootmem()"), alloc_bootmem and
related functions never return NULL and always return a zeroed region
of memory.  Thus a NULL test or memset after calls to these functions
is unnecessary.

This was fixed using the following semantic patch.
(http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@@
expression E;
statement S;
@@

E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
(
- BUG_ON (E == NULL);
|
- if (E == NULL) S
)

@@
expression E,E1;
@@

E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
- memset(E,0,E1);
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-03 20:46:36 +11:00
Becky Bruce
15e09c0eca powerpc: Add sync_*_for_* to dma_ops
We need to swap these out once we start using swiotlb, so add
them to dma_ops.  Create CONFIG_PPC_NEED_DMA_SYNC_OPS Kconfig
option; this is currently enabled automatically if we're
CONFIG_NOT_COHERENT_CACHE.  In the future, this will also
be enabled for builds that need swiotlb.  If PPC_NEED_DMA_SYNC_OPS
is not defined, the dma_sync_*_for_* ops compile to nothing.
Otherwise, they access the dma_ops pointers for the sync ops.

This patch also changes dma_sync_single_range_* to actually
sync the range - previously it was using a generous
dma_sync_single.  dma_sync_single_* is now implemented
as a dma_sync_single_range with an offset of 0.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-03 20:46:36 +11:00
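A hedged sketch of the dispatch described above; the ops member names follow the commit text but should be treated as illustrative.

/* With CONFIG_PPC_NEED_DMA_SYNC_OPS the helper goes through dma_ops;
 * otherwise it compiles away to nothing. */
static inline void dma_sync_single_for_cpu(struct device *dev,
					   dma_addr_t handle, size_t size,
					   enum dma_data_direction dir)
{
#ifdef CONFIG_PPC_NEED_DMA_SYNC_OPS
	struct dma_mapping_ops *ops = get_dma_ops(dev);

	if (ops->sync_single_range_for_cpu)
		ops->sync_single_range_for_cpu(dev, handle, 0, size, dir);
#endif
}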
Johannes Berg
c4d04be11f powerpc: Allow the max stack trace depth to be configured
On my screen, when something crashes, I only have space for maybe 16
functions of the stack trace before the information above it scrolls
off the screen.  It's easy to hack the kernel to print out only that
much, but it's harder to remember to do it.  This introduces a config
option for it so that I can keep the setting in my config.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-03 20:46:35 +11:00
Kumar Gala
1b98326b91 powerpc: Add MSR[CE, DE] to the MSR bits we print on show_regs()
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-03 20:46:35 +11:00
Paul Mackerras
5274918855 Merge branch 'merge' 2008-12-03 20:11:06 +11:00
Benjamin Herrenschmidt
2434bbb30e powerpc: Fix dma_map_sg() cache flushing on non coherent platforms
On PowerPC 4xx or other non cache-coherent platforms, we lost the
appropriate cache flushing in dma_map_sg() when merging the 32 and
64-bit DMA code (commit 4fc665b88a,
"powerpc: Merge 32 and 64-bit dma code").  This restores it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-03 18:24:08 +11:00
Milton Miller
a1e0eb1042 powerpc: Fix build for 32-bit SMP configs
attr_smt_snooze_delay is only defined for CONFIG_PPC64, so protect the
attribute removal with the same condition.  This fixes this build error
on 32-bit SMP configurations:

/data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c: In function ‘unregister_cpu_online’:
/data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: ‘attr_smt_snooze_delay’ undeclared (first use in this function)
/data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: (Each undeclared identifier is reported only once
/data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: for each function it appears in.)

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-01 13:28:19 +11:00
Paul Mackerras
ab598b6680 powerpc: Fix system calls on Cell entered with XER.SO=1
It turns out that on Cell, on a kernel with CONFIG_VIRT_CPU_ACCOUNTING
= y, if a program sets the SO (summary overflow) bit in the XER and
then does a system call, the SO bit in CR0 will be set on return
regardless of whether the system call detected an error.  Since CR0.SO
is used as the error indication from the system call, this means that
all system calls appear to fail.

The reason is that the workaround for the timebase bug on Cell uses a
compare instruction.  With CONFIG_VIRT_CPU_ACCOUNTING = y, the
ACCOUNT_CPU_USER_ENTRY macro reads the timebase, so we end up doing a
compare instruction, which copies XER.SO to CR0.SO.  Since we were
doing this in the system call entry patch after clearing CR0.SO but
before saving the CR, this meant that the saved CR image had CR0.SO
set if XER.SO was set on entry.

This fixes it by moving the clearing of CR0.SO to after the
ACCOUNT_CPU_USER_ENTRY call in the system call entry path.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-12-01 09:40:19 +11:00
Adhemerval Zanella
4b824de9b1 powerpc: Fix IRQ assignment for some PCIe devices
Currently, some PCIe devices on POWER6 machines do not get interrupts
assigned correctly.  The problem is that OF doesn't create an
"interrupt" property for them.  The fix is for of_irq_map_pci to fall
back to using the value in the PCI interrupt-pin register in config
space, as we do when there is no OF device-tree node for the device.

I have verified that this works fine with a pair of Squib-E SAS
adapters on a P6-570.

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-01 09:40:18 +11:00
Steven Rostedt
f1eecf0e4f powerpc/ppc32: static ftrace fixes for PPC32
Impact: fix for PowerPC 32 code

There was some early init code that was not safe for static
ftrace when booting on my PowerBook. This code must only use relative
addressing, but static mcount performs a compare of the
ftrace_trace_function pointer, and fetches it through an absolute
address.  In the early init boot-up code, this causes a fault.

This patch removes tracing from the files containing the offending
functions.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:08:07 +01:00
Steven Rostedt
0029ff8752 powerpc: ftrace, use create_branch
Impact: clean up

Paul Mackerras pointed out that the code to determine if the branch
can reach the destination is incorrect. Michael Ellerman suggested
to pull out the code from create_branch and use that.

Simply using create_branch is probably the best.

Reported-by: Michael Ellerman <michael@ellerman.id.au>
Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:08:01 +01:00
Steven Rostedt
ec682cef2d powerpc: ftrace, added missing icache flush
Impact: fix to PowerPC code modification

After modifying code it is essential to flush the icache. This patch
adds the missing flush.

Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:07:56 +01:00
Steven Rostedt
d9af12b72b powerpc: ftrace, fix cast aliasing and add code verification
Impact: clean up and robustness addition

This patch addresses the comments made by Paul Mackerras.
It removes the casts between unsigned int and unsigned char
pointers, replacing them with unsigned int throughout.

Verification that the jump is indeed made to a trampoline has also
been added.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:07:50 +01:00
Steven Rostedt
c7b0d17366 powerpc: ftrace, do nothing in mcount call for dyn ftrace
Impact: quicken mcount calls that are not replaced by dyn ftrace

Dynamic ftrace no longer does on the fly recording of mcount locations.
The mcount locations are now found at compile time. The mcount
function no longer needs to store registers and call a stub function.
It can now just simply return.

Since there are some functions that do not get converted to a nop
(.init sections and other code that may disappear), this patch should
help speed up that code.

Also, the stub for mcount on PowerPC 32 can not be a simple branch
link register like it is on PowerPC 64. According to the ABI specification:

"The _mcount routine is required to restore the link register from
 the stack so that the profiling code can be inserted transparently,
 whether or not the profiled function saves the link register itself."

This means that we must restore the link register that was used
to make the call to mcount.  The minimal mcount function for PPC32
ends up being:

 mcount:
        mflr    r0
        mtctr   r0
        lwz     r0, 4(r1)
        mtlr    r0
        bctr

Where we move the link register used to call mcount into the
ctr register, and then restore the link register from the stack.
Then we use the ctr register to jump back to the mcount caller.
The r0 register is free for us to use.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:07:45 +01:00
Steven Rostedt
7cc45e6432 powerpc/ppc32: ftrace, dynamic ftrace to handle modules
Impact: add ability to trace modules on 32 bit PowerPC

This patch performs the necessary trampoline calls to handle
modules with dynamic ftrace on 32 bit PowerPC.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:52:53 -08:00
Steven Rostedt
f48cb8b48b powerpc/ppc64: ftrace, handle module trampolines for dyn ftrace
Impact: Allow 64 bit PowerPC to trace modules with dynamic ftrace

This adds code to handle the PPC64 module trampolines, and allows for
PPC64 to use dynamic ftrace.

Thanks to Paul Mackerras for these updates:

  - fix the mod and rec->arch.mod NULL checks.
  - fix to is_bl_op compare.

Thanks to Milton Miller for:

  - finding the nasty race with using two nops, and recommending
    instead that I use a branch 8 forward.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:52:28 -08:00
Steven Rostedt
e4486fe316 powerpc: ftrace, use probe_kernel API to modify code
Impact: use cleaner probe_kernel API over assembly

Using probe_kernel_read/write interface is a much cleaner approach
than the current assembly version.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:52:04 -08:00
Steven Rostedt
8fd6e5a8c8 powerpc: ftrace, convert to new dynamic ftrace arch API
Impact: update to PowerPC ftrace arch API

This patch converts PowerPC to use the new dynamic ftrace arch API.

Thanks to Paul Mackerras for pointing out the mistakes in my original
test_24bit_addr function.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:51:40 -08:00
Steven Rostedt
6d07bb4735 powerpc: ftrace, do not latency trace idle
Impact: fix for irq off latency tracer

When idle is called, interrupts are disabled, but the idle function
will still wake up on an interrupt. The problem is that the
interrupts-off latency tracer counts this time spent in idle as a
latency.

This patch disables the latency tracing when going into idle.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:51:15 -08:00
Milton Miller
25ddd738c2 powerpc: Provide a separate handler for each IPI action
With the new generic smp call function helpers, I noticed the code in
smp_message_recv was a single function call in many cases.  While
getting the message number from the ipi data is easy, we can shorten
the path by one function call and a data-dependent switch by
registering separate IPI actions for these simple calls.

Originally I left the ipi action array exposed, but then I realized the
registration code should be common too.

The three users each had their own name array, so I made a fourth
to convert all users to use a common one.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-19 16:05:06 +11:00
Michael Ellerman
54018178ef powerpc: Use for_each_node_with_property() in of_irq_map_init()
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-19 16:05:01 +11:00
Benjamin Herrenschmidt
6612d9b0b8 powerpc/44x: Fix 460EX/460GT machine check handling
Those cores use the 440A type machine check (ie, they have
MCSRR0/MCSRR1). They thus need to call the appropriate fixup
function to hook the right variant of the exception.

Without this, all machine checks become fatal due to loss
of context when entering the exception handler.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2008-11-13 10:11:26 -05:00
Paul Mackerras
486936cd93 Merge branch 'linux-2.6' into next 2008-11-12 08:43:22 +11:00
Andreas Schwab
77eb50aefa powerpc: Fix msr check in compat_sys_swapcontext
The new context may not be 16-byte aligned, so the real address of the
mcontext structure should be read from the uc_regs pointer instead of
directly using the (unaligned) uc_mcontext field.

Signed-off-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-11 19:42:22 +11:00
Kumar Gala
b41d6fee37 powerpc/fsl-booke: Fix synchronization bug w/local tlb invalidates
The implementation of _tlbil_pid() on Freescale Book-E cores needs
an msync & isync after we flash-invalidate the TLBs.  The missing
synchronization was causing the following oops, reported by Sebastian
Andrzej Siewior:

  VFS: Mounted root (nfs filesystem) readonly.
  Freeing unused kernel memory: 148k init
  BUG: sleeping function called from invalid context at /home/bigeasy/git/linux-2.6-powerpc/mm/mmap.c:234
  in_atomic():1, irqs_disabled():0
  Call Trace:
  [df189df0] [c0007160] show_stack+0x48/0x148 (unreliable)
  [df189e30] [c0029480] __might_sleep+0xf0/0x100
  [df189e40] [c0070ac0] remove_vma+0x28/0x98
  [df189e50] [c0070c1c] exit_mmap+0xec/0x128
  [df189e80] [c002d2f4] mmput+0x54/0xec
  [df189ea0] [c0030b6c] exit_mm+0x10c/0x120
  [df189ed0] [c003288c] do_exit+0x1ac/0x6e8
  [df189f20] [c0032e48] do_group_exit+0x80/0xac
  [df189f40] [c000e9dc] ret_from_syscall+0x0/0x3c
  BUG: scheduling while atomic: udevd/956/0x10000002
  Modules linked in:
  Call Trace:
  [df189df0] [c0007160] show_stack+0x48/0x148 (unreliable)
  [df189e30] [c002ac88] __schedule_bug+0x58/0x6c
  [df189e40] [c023e6cc] schedule+0xa8/0x4a8
  [df189e90] [c002ad6c] __cond_resched+0x38/0x64
  [df189ea0] [c023ebc8] _cond_resched+0x3c/0x58
  [df189eb0] [c0030e70] put_files_struct+0x90/0xec
  [df189ed0] [c00328a8] do_exit+0x1c8/0x6e8
  [df189f20] [c0032e48] do_group_exit+0x80/0xac
  [df189f40] [c000e9dc] ret_from_syscall+0x0/0x3c

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-11-08 12:38:55 -06:00
Paul Mackerras
3cc698789a powerpc: Eliminate unused do_gtod variable
Since we started using the generic timekeeping code, we haven't had a
powerpc-specific version of do_gettimeofday, and hence there is now
nothing that reads the do_gtod variable in arch/powerpc/kernel/time.c.
This therefore removes it and the code that sets it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:49:28 +11:00
Paul Mackerras
597bc5c00b powerpc: Improve resolution of VDSO clock_gettime
Currently the clock_gettime implementation in the VDSO produces a
result with microsecond resolution for the cases that are handled
without a system call, i.e. CLOCK_REALTIME and CLOCK_MONOTONIC.  The
nanoseconds field of the result is obtained by computing a
microseconds value and multiplying by 1000.

This changes the code in the VDSO to do the computation for
clock_gettime with nanosecond resolution.  That means that the
resolution of the result will ultimately depend on the timebase
frequency.

Because the timestamp in the VDSO datapage (stamp_xsec, the real time
corresponding to the timebase count in tb_orig_stamp) is in units of
2^-20 seconds, it doesn't have sufficient resolution for computing a
result with nanosecond resolution.  Therefore this adds a copy of
xtime to the VDSO datapage and updates it in update_gtod() along with
the other time-related fields.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:49:22 +11:00
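A C model of the fast path described above (the real code is VDSO assembly). The snapshot structure and its field names are illustrative stand-ins, not the actual datapage layout.

/* Convert the timebase delta since the last update_gtod() into ns and
 * add it to the xtime snapshot copied into the vdso datapage. */
struct vdso_time_snapshot {			/* illustrative only */
	unsigned long long	tb_orig_stamp;	/* timebase at snapshot */
	unsigned long long	tb_to_ns_scale;	/* fixed-point scale factor */
	unsigned int		tb_to_ns_shift;
	long			xtime_sec;
	long			xtime_nsec;
};

static void sample_clock_gettime(const struct vdso_time_snapshot *d,
				 unsigned long long now_tb,
				 long *sec, long *nsec)
{
	unsigned long long ns;

	ns = ((now_tb - d->tb_orig_stamp) * d->tb_to_ns_scale) >> d->tb_to_ns_shift;

	*sec = d->xtime_sec;
	*nsec = d->xtime_nsec + (long)ns;
	while (*nsec >= 1000000000L) {		/* normalize into [0, 1s) */
		*nsec -= 1000000000L;
		(*sec)++;
	}
}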
Benjamin Herrenschmidt
7eef440a54 powerpc/pci: Cosmetic cleanups of pci-common.c
This does a few cosmetic cleanups, moving a couple of things around
but without actually changing what the code does.

(There is a minor change in ordering of operations in
pcibios_setup_bus_devices but it should have no impact).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:41:52 +11:00
Benjamin Herrenschmidt
fd6852c8fa powerpc/pci: Fix various pseries PCI hotplug issues
The pseries PCI hotplug code has a number of issues, ranging from
incorrect resource setup to crashes, depending on what is added,
when, whether it contains a bridge, etc etc....

This fixes a whole bunch of these, while actually simplifying the code
a bit, using more generic code in the process and factoring out common
code between adding of a PHB, a slot or a device.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:31:52 +11:00
Benjamin Herrenschmidt
b5ae5f911d powerpc/pci: Make pcibios_allocate_bus_resources more robust
To properly fix PCI hotplug, it's useful to be able to make the fixup
passes on all devices whether they were just hot plugged or already
there.

However, pcibios_allocate_bus_resources() wouldn't cope well with
being called twice for a given bus.  This makes it ignore resources
that have already been allocated, along with adding a bit of debug
output.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:26:05 +11:00
Benjamin Herrenschmidt
8b8da35804 powerpc/pci: Split pcibios_fixup_bus() into bus setup and device setup
Currently, our PCI code uses the pcibios_fixup_bus() callback, which
is called by the generic code when probing PCI buses, for two
different things.

One is to set up things related to the bus itself, such as reading
bridge resources for P2P bridges, fixing them up, or setting up the
iommu's associated with bridges on some platforms.

The other is some setup for each individual device under that bridge,
mostly setting up DMA mappings and interrupts.

The problem is that this approach doesn't work well with PCI hotplug
when an existing bus is re-probed for new children.  We fix this
problem by splitting pcibios_fixup_bus into two routines:

	pcibios_setup_bus_self() is now called to setup the bus itself

	pcibios_setup_bus_devices() is now called to setup devices

pcibios_fixup_bus() is then modified to call these two after reading the
bridge bases, and the OF based PCI probe is modified to avoid calling
into the first one when rescanning an existing bridge.

[paulus@samba.org - fixed eeh.h for 32-bit compile now that pci-common.c
is including it unconditionally.]

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:22:37 +11:00
Benjamin Herrenschmidt
ab56ced9c5 powerpc/pci: Remove pcibios_do_bus_setup()
The function pcibios_do_bus_setup() was used by pcibios_fixup_bus()
to perform setup that is different between the 32-bit and 64-bit
code.  This difference no longer exists, thus the function is removed
and the setup now done directly from pci-common.c.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:11:53 +11:00
Benjamin Herrenschmidt
5328032335 powerpc/pci: Use common PHB resource hookup
The 32-bit and 64-bit powerpc PCI code used to set up the resource
pointers of the root bus of a given PHB in completely different
places.

This unifies this in large part, by making 32-bit use a routine very
similar to what 64-bit does when initially scanning the PCI busses.

The actual setup of the PHB resources itself is then moved to a
common function in pci-common.c.

This should cause no functional change on 64-bit.  On 32-bit, the
effect is that the PHB resources are going to be set up a bit earlier,
instead of being set up from pcibios_fixup_bus().

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:11:53 +11:00
Benjamin Herrenschmidt
b0494bc8ee powerpc/pci: Cleanup debug printk's
This removes the various DBG() macro from the powerpc PCI code and
makes it use the standard pr_debug instead.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:11:53 +11:00
Brian King
409001948d powerpc: Update page-in counter for CMM
A new field has been added to the VPA as a method for the client OS to
communicate to firmware the number of page-ins it is performing when
running collaborative memory overcommit.  The hypervisor will use this
information to better determine if a partition is experiencing memory
pressure and needs more memory allocated to it.

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:08:28 +11:00
Benjamin Herrenschmidt
a6a8e009b1 powerpc: Silence software timebase sync
When no hardware method is provided to sync the timebase registers
across the machine, and the platform doesn't sync them for us, then we
use a generic software implementation.  Currently, the code for that
has many printks, and they don't have log levels.  Most of the printks
are only useful for debugging the code, and since we haven't had any
problems with it for years, this turns them into pr_debug.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:08:28 +11:00
Benjamin Herrenschmidt
1fd0f52583 powerpc: Fix domain numbers in /proc on 64-bit
The code to properly expose domain numbers in /proc is somewhat
bogus on ppc64 as it depends on the "buid" field being non-0,
but that field is really pseries specific.

This removes that code and makes ppc64 use the same code as 32-bit
which effectively decides whether to expose domains based on
ppc_pci_flags set by the platform, and sets the default for 64-bit
to enable domains and enable compatibility for domain 0 (which
strips the domain number for domain 0 to help with X servers).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:08:27 +11:00
Linus Torvalds
f891caf28f Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (23 commits)
  Revert "powerpc: Sync RPA note in zImage with kernel's RPA note"
  powerpc: Fix compile errors with CONFIG_BUG=n
  powerpc: Fix format string warning in arch/powerpc/boot/main.c
  powerpc: Fix bug in kernel copy of libfdt's fdt_subnode_offset_namelen()
  powerpc: Remove duplicate DMA entry from mpc8313erdb device tree
  powerpc/cell/OProfile: Fix on-stack array size in activate spu profiling function
  powerpc/mpic: Fix regression caused by change of default IRQ affinity
  powerpc: Update remaining dma_mapping_ops to use map/unmap_page
  powerpc/pci: Fix unmapping of IO space on 64-bit
  powerpc/pci: Properly allocate bus resources for hotplug PHBs
  OF-device: Don't overwrite numa_node in device registration
  powerpc: Fix swapcontext system for VSX + old ucontext size
  powerpc: Fix compiler warning for the relocatable kernel
  powerpc: Work around ld bug in older binutils
  powerpc/ppc64/kdump: Better flag for running relocatable
  powerpc: Use is_kdump_kernel()
  powerpc: Kexec exit should not use magic numbers
  powerpc/44x: Update 44x defconfigs
  powerpc/40x: Update 40x defconfigs
  powerpc: enable heap randomization for linkstations
  ...
2008-10-31 08:14:15 -07:00
Paul Mackerras
5663a1232b Revert "powerpc: Sync RPA note in zImage with kernel's RPA note"
This reverts commit 91a0030295, plus
commit 0dcd440120 ("powerpc: Revert CHRP
boot wrapper to real-base = 12MB on 32-bit") which depended on it.

Commit 91a00302 was causing NVRAM corruption on some pSeries machines,
for as-yet unknown reasons, so this reverts it until the cause is
identified.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 22:36:21 +11:00
Mark Nelson
f9226d572d powerpc: Update remaining dma_mapping_ops to use map/unmap_page
After the merge of the 32 and 64bit DMA code, dma_direct_ops lost
their map/unmap_single() functions but gained map/unmap_page().  This
caused a problem for Cell because Cell's dma_iommu_fixed_ops called
the dma_direct_ops if the fixed linear mapping was to be used or the
iommu ops if the dynamic window was to be used.  So in order to fix
this problem we need to update the 64bit DMA code to use
map/unmap_page.

First, we update the generic IOMMU code so that iommu_map_single()
becomes iommu_map_page() and iommu_unmap_single() becomes
iommu_unmap_page().  Then we propagate these changes up through all
the callers of these two functions and in the process update all the
dma_mapping_ops so that they have map/unmap_page rather than
map/unmap_single.  We can do this because on 64bit there is no HIGHMEM
memory so map/unmap_page ends up performing exactly the same function
as map/unmap_single, just taking different arguments.

This has no effect on drivers because the dma_map_single_attrs() just
ends up calling the map_page() function of the appropriate
dma_mapping_ops and similarly the dma_unmap_single_attrs() calls
unmap_page().

This fixes an oops on Cell blades, which oops on boot without this
because they call dma_direct_ops.map_single, which is NULL.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:13:48 +11:00
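A hedged sketch of the equivalence the commit relies on: with no HIGHMEM on 64-bit, a (page, offset) pair converts directly to the kernel virtual address that map_single() used to take. The example_* helper names are hypothetical.

static dma_addr_t example_map_single(struct device *dev, void *vaddr,
				     size_t size, enum dma_data_direction dir);

/* Express a map_page() in terms of an old-style single mapping; valid
 * only because ppc64 has no highmem, so page_address() always works. */
static dma_addr_t example_map_page(struct device *dev, struct page *page,
				   unsigned long offset, size_t size,
				   enum dma_data_direction dir)
{
	void *vaddr = page_address(page) + offset;

	return example_map_single(dev, vaddr, size, dir);
}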
Benjamin Herrenschmidt
b30115ea8f powerpc/pci: Fix unmapping of IO space on 64-bit
A typo/thinko made us pass the wrong argument to __flush_hash_table_range
when unplugging bridges, thus not flushing all the translations for
the IO space on unplug.  The third parameter to __flush_hash_table_range
is `end', not `size'.

This causes the hypervisor to refuse unplugging slots.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:13:46 +11:00
Nathan Fontenot
e90a131846 powerpc/pci: Properly allocate bus resources for hotplug PHBs
Resources for PHB's that are dynamically added to a system are not
properly allocated in the resource tree.

Not having these resources allocated causes an oops when we try to
release them while removing the PHB.

The diff appears a bit messy; this is mainly due to moving everything
one tab to the left in the pcibios_allocate_bus_resources routine.
The functionality change in this routine is only that the
list_for_each_entry() loop is pulled out and moved to the necessary
calling routine.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:12:03 +11:00
Jeremy Kerr
6098e2ee14 OF-device: Don't overwrite numa_node in device registration
Currently, the numa_node of OF-devices will be overwritten during
device_register, which simply sets the node to -1.  On cell machines,
this means that devices can't find their IOMMU, which is referenced
through the device's numa node.

Set the numa node for OF devices with no parent, and use the
lower-level device_initialize and device_add functions, so that the
node is preserved.

We can remove the call to set_dev_node in of_device_alloc, as it
will be overwritten during register.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:12:01 +11:00
Michael Neuling
16c29d180b powerpc: Fix swapcontext system for VSX + old ucontext size
Since VSX support was added, we now have two sizes of ucontext_t;
the older, smaller size without the extra VSX state, and the new
larger size with the extra VSX state.  A program using the
sys_swapcontext system call and supplying smaller ucontext_t
structures will currently get an EINVAL error if the task has
used VSX (e.g. because of calling library code that uses VSX) and
the old_ctx argument is non-NULL (i.e. the program is asking for
its current context to be saved).  Thus the program will start
getting EINVAL errors on calls that previously worked.

This commit changes this behaviour so that we don't send an EINVAL in
this case.  It will now return the smaller context but the VSX MSR bit
will always be cleared to indicate that the ucontext_t doesn't include
the extra VSX state, even if the task has executed VSX instructions.

Both 32 and 64 bit cases are updated.

[paulus@samba.org - also fix some access_ok() and get_user() calls]

Thanks to Ben Herrenschmidt for noticing this problem.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:12:00 +11:00
Michael Neuling
b160544ccc powerpc: Fix compiler warning for the relocatable kernel
Fixes this warning:
 arch/powerpc/kernel/setup_64.c:447:5: warning: "kernstart_addr" is not defined

which arises because PHYSICAL_START is no longer a constant when
CONFIG_RELOCATABLE=y.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:54 +11:00
Paul Mackerras
2a4b9c5af8 powerpc: Work around ld bug in older binutils
Commit 549e8152de ("powerpc: Make the
64-bit kernel as a position-independent executable") added lines to
vmlinux.lds.S to add the extra sections needed to implement a
relocatable kernel.  However, those lines seem to trigger a bug in
older versions of GNU ld (such as 2.16.1) when building a
non-relocatable kernel.  Since ld 2.16.1 is still a popular choice for
cross-toolchains, this adds an #ifdef to vmlinux.lds.S so the added
lines are only included when building a relocatable kernel.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:52 +11:00
Milton Miller
8b8b0cc1c7 powerpc/ppc64/kdump: Better flag for running relocatable
The __kdump_flag ABI is overly constraining for future development.

As of 2.6.27, the kernel entry point has 4 constraints:  Offset 0 is
the starting point for the master (boot) cpu (entered with r3 pointing
to the device tree structure), offset 0x60 is code for the slave cpus
(entered with r3 set to their device tree physical id), offset 0x20 is
used by the iseries hypervisor, and secondary cpus must be well behaved
when the first 256 bytes are copied to address 0.

Placing the __kdump_flag at 0x18 is bad because:

- It was taking the last 8 bytes before the iseries hypervisor data.
- It was 8 bytes for a boolean flag
- It had no way of identifying that the flag was present
- It does not leave any room for the master to add any additional code
  before branching, which hurts debugging.
- It will be unnecessarily hard for 32 bit code to be common (8 bytes)

Now that we have eliminated the use of __kdump_flag in favor of
the standard is_kdump_kernel(), this flag only controls whether we run
without relocating the kernel to PHYSICAL_START (0), so rename it
__run_at_load.

Move the flag to 0x5c, 1 word before the secondary cpu entry point at
0x60.  Initialize it with "run0" to say it will run at 0 unless it is
set to 1.  It only exists if we are relocatable.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:49 +11:00
Milton Miller
62a8bd6c92 powerpc: Use is_kdump_kernel()
linux/crash_dump.h defines is_kdump_kernel() to be used by code that
needs to know if the previous kernel crashed instead of a (clean) boot
or reboot.

This updates the just added powerpc code to use it.  This is needed
for the next commit, which will remove __kdump_flag.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:47 +11:00
Milton Miller
1767c8f392 powerpc: Kexec exit should not use magic numbers
Commit 54622f10a6 ("powerpc: Support for
relocatable kdump kernel") added a magic flag value in a register to
tell purgatory that it should be a panic kernel.  This part is wrong
and is reverted by this commit.

The kernel gets a list of memory blocks and an entry point from user space.
Its job is to copy the blocks into place and then branch to the designated
entry point (after turning "off" the mmu).

The user space tool inserts a trampoline, called purgatory, that runs
before the user supplied code.   Its job is to establish the entry
environment for the new kernel or other application based on the contents
of memory.  The purgatory code is compiled and embedded in the tool,
where it is later patched via its ELF symbol table.

Since the tool knows it is creating a purgatory that will run after a
kernel crash, it should just patch purgatory (or the kernel directly)
if something needs to happen.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:44 +11:00
Ingo Molnar
4944dd62de Merge commit 'v2.6.28-rc2' into tracing/urgent 2008-10-27 10:50:54 +01:00
Linus Torvalds
5b34653963 Merge branch 'x86/um-header' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86/um-header' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (26 commits)
  x86: canonicalize remaining header guards
  x86: drop double underscores from header guards
  x86: Fix ASM_X86__ header guards
  x86, um: get rid of uml-config.h
  x86, um: get rid of arch/um/Kconfig.arch
  x86, um: get rid of arch/um/os symlink
  x86, um: get rid of excessive includes of uml-config.h
  x86, um: get rid of header symlinks
  x86, um: merge Kconfig.i386 and Kconfig.x86_64
  x86, um: get rid of sysdep symlink
  x86, um: trim the junk from uml ptrace-*.h
  x86, um: take vm-flags.h to sysdep
  x86, um: get rid of uml asm/arch
  x86, um: get rid of uml highmem.h
  x86, um: get rid of uml unistd.h
  x86, um: get rid of system.h -> system.h include
  x86, um: uml atomic.h is not needed anymore
  x86, um: untangle uml ldt.h
  x86, um: get rid of more uml asm/arch uses
  x86, um: remove dead header (uml module-generic.h; never used these days)
  ...
2008-10-23 10:22:01 -07:00
Steven Rostedt
15adc04898 ftrace, powerpc, sparc64, x86: remove notrace from arch ftrace file
The entire file of ftrace.c in the arch code needs to be marked
as notrace. It is much cleaner to do this from the Makefile with
CFLAGS_REMOVE_ftrace.o.

[ powerpc already had this in its Makefile. ]

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-23 16:00:25 +02:00
Steven Rostedt
4d296c2432 ftrace: remove mcount set
The arch dependent function ftrace_mcount_set was only used by the daemon
start up code. This patch removes it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-23 16:00:23 +02:00
Al Viro
2e074004c6 x86, um: get rid of uml signal.h
the only theoretical reason for it these days is ppc; aside from
uml/ppc being dead, do_signal() would be happier in
arch/powerpc/kernel/signal.h anyway.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-10-22 22:55:20 -07:00
Ingo Molnar
debfcaf93e Merge branch 'tracing/ftrace' into tracing/urgent 2008-10-22 09:08:14 +02:00
Mohan Kumar M
54622f10a6 powerpc: Support for relocatable kdump kernel
This adds relocatable kernel support for kdump. With this one can
use the same regular kernel to capture the kdump. A signature (0xfeed1234)
is passed in r6 from panic code to the next kernel through kexec_sequence
and purgatory code. The signature is used to differentiate between
kdump kernel and non-kdump kernels.

The purgatory code compares the signature and sets the __kdump_flag in
head_64.S.  During boot-up, kernel code checks __kdump_flag and, if it
is set, the kernel behaves as a relocatable kdump kernel. This kernel
will boot at the address where it was loaded by kexec-tools, i.e. at the
address reserved through the crashkernel boot parameter.

CONFIG_CRASH_DUMP depends on CONFIG_RELOCATABLE option to build kdump
kernel as relocatable. So the same kernel can be used as production and
kdump kernel.

This patch incorporates the changes suggested by Paul Mackerras to avoid
GOT use and to avoid two copies of the code.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-22 15:01:22 +11:00
David Gibson
201bdc868d powerpc: Further compile fixup for STRICT_MM_TYPECHECKS
A patch of mine was recently committed to fix up STRICT_MM_TYPECHECKS
behaviour on powerpc (f5ea64dcba).
However, something which breaks it again seems to have slipped in
afterwards.  So, here's another small fix.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-22 11:00:26 +11:00