for_each_cpu() actually iterates across all possible CPUs. We've had mistakes
in the past where people were using for_each_cpu() where they should have been
iterating across only online or present CPUs. This is inefficient and
possibly buggy.
We're renaming for_each_cpu() to for_each_possible_cpu() to avoid this in the
future.
This patch replaces for_each_cpu() with for_each_possible_cpu().
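A minimal before/after sketch (the per-CPU setup helper is hypothetical):
  for_each_cpu(cpu)                     /* old name: iterates all possible CPUs */
          setup_my_percpu_data(cpu);
  for_each_possible_cpu(cpu)            /* new name: same behaviour, explicit intent */
          setup_my_percpu_data(cpu);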
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is back again. Offending patch is x86_64-mm-hotadd-reserve.patch
arch/arm/kernel/setup.c:435: error: conflicting types for 'add_memory'
include/linux/memory_hotplug.h:102: error: previous declaration of 'add_memory' was here
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
While cleaning up parisc_ksyms.c earlier, I noticed that strpbrk wasn't
being exported from lib/string.c. Investigating further, I noticed a
changeset that removed its export and added it to _ksyms.c on a few more
architectures. The justification was that "other arches do it."
I think this is wrong: since no architecture currently defines
__HAVE_ARCH_STRPBRK, there's no reason for any of them to be exporting it
themselves. Therefore, consolidate the export to lib/string.c.
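For illustration, the consolidated export then sits next to the generic
definition in lib/string.c, roughly:
  #ifndef __HAVE_ARCH_STRPBRK
  char *strpbrk(const char *cs, const char *ct)
  {
          const char *sc1, *sc2;
          for (sc1 = cs; *sc1 != '\0'; ++sc1)
                  for (sc2 = ct; *sc2 != '\0'; ++sc2)
                          if (*sc1 == *sc2)
                                  return (char *)sc1;
          return NULL;
  }
  EXPORT_SYMBOL(strpbrk);
  #endif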
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Catalin Marinas
Glibc interprets the HWCAP bits and decides on what features to use.
However, even if the features are present in the hardware, they are not
always supported by the kernel and hence the corresponding bits have to be
cleared from the elf_hwcap variable.
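A hedged example of such clearing, using VFP as the illustrative case of a
capability present in hardware but possibly configured out of the kernel:
  #ifndef CONFIG_VFP
          elf_hwcap &= ~HWCAP_VFP;        /* kernel built without VFP: hide it from glibc */
  #endif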
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Lennert Buytenhek
This patch adds support for the I/O coherent cache available on the
xsc3. The approach is to provide a simple API to determine whether the
chipset supports coherency by calling arch_is_coherent() and then
setting the appropriate system memory PTE and PMD bits. In addition,
we call this API on dma_alloc_coherent() and dma_map_single() calls.
A generic version exists that will compile out all the coherency-related
code that is not needed on the majority of ARM systems.
Note that we do not check for coherency in the dma_alloc_writecombine()
function as that still requires a special PTE setting. We also don't
touch dma_mmap_coherent() as that is a special ARM-only API that is by
definition only used on non-coherent systems.
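A minimal sketch of the generic fallback described above; coherent platforms
would make arch_is_coherent() return 1 so the maintenance path compiles away
(the cache maintenance call shown is illustrative):
  #ifndef arch_is_coherent
  #define arch_is_coherent()      0       /* most ARM systems are not I/O coherent */
  #endif
  if (!arch_is_coherent())
          clean_dcache_range(vaddr, vaddr + size);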
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Komal Shah
This patch fixes the duplicate exports of string library functions.
Signed-off-by: Komal Shah <komal_shah802003@yahoo.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* master.kernel.org:/home/rmk/linux-2.6-arm:
[ARM] 3424/2: ixp23xx: fix uncompress.h for recent CRLF decompressor change
[ARM] 3434/1: pxa i2s amsl define
[ARM] 3425/1: xsc3: need to include pgtable-hwdef.h
[ARM] Allow un-muxed syscalls to be available for everyone
[ARM] 3420/1: Missing clobber in example code
[ARM] nommu: fixups for the exception vectors
[ARM] nommu: add nommu specific Kconfig and MMUEXT variable in Makefile
[ARM] nommu: start-up code
[ARM] nommu: MPU support in boot/compressed/head.S
The only user of get_wchan is the proc fs - and proc can't be built modular.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Paul Brook
The example code in the source documentation for __kernel_dmb
clobbers r0 but doesn't list it in the asm clobber list.
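A hedged reconstruction of the corrected call sequence as C inline asm;
0xffff0fa0 is the documented address of __kernel_dmb, and r0 now appears in
the clobber list because the sequence below uses it:
  #define __kernel_dmb()                                          \
          asm volatile("mov      r0, #0xffff0fff\n\t"             \
                       "mov      lr, pc\n\t"                      \
                       "sub      pc, r0, #95"                     \
                       : : : "r0", "lr", "cc", "memory")          \
          /* 0xffff0fff - 95 = 0xffff0fa0 */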
Signed-off-by: Paul Brook <paul@codesourcery.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The high page vector (0xFFFF0000) is not supported in nommu mode.
This patch allows the vectors to be at 0x00000000 or at the beginning
of DRAM in nommu mode.
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds nommu version start-up code head-nommu.S.
The common part of the start-up codes is moved to head-common.S.
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
Quoting RMK:
|pte_write() just says that the page _may_ be writable. It doesn't say
|that the MMU is programmed to allow writes. If pte_dirty() doesn't
|return true, that means that the page is _not_ writable from userspace.
|If you write to it from kernel mode (without using put_user) you'll
|bypass the MMU read-only protection and may end up writing to a page
|owned by two separate processes.
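A small illustrative helper capturing that rule (the function name is
hypothetical):
  static inline int pte_kernel_writable(pte_t pte)
  {
          /* pte_write() only says the page may be made writable;
           * a clean (non-dirty) PTE is still read-only in the MMU */
          return pte_write(pte) && pte_dirty(pte);
  }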
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The recent addition of boot_cpu_init() implements the initialisation
of the online, present and possible cpu maps for the boot CPU, so
there is no reason to duplicate this in the architecture
smp_prepare_boot_cpu() hook.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
5d25ac038a broke VFP builds due to
enable_irq not being defined as an assembly macro. Move it to
assembler.h so everyone can use it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Only issue a "nobody cared" warning after 99900 spurious interrupts.
This avoids the occasional spurious interrupt causing warnings, as
per x86.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
This field is redundant since it must be equal to PHYS_OFFSET anyway.
There is no reference to it anymore, so remove it at last.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Allow the individual coprocessor handlers to decide when to enable
interrupts, rather than unconditionally enabling them.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
asm/hardware.h is not required for the majority of processor support
files, ioremap support, mm initialisation, acorn IO support, nor
the debug code (which picks up its machine specific includes via
debug-macros.S)
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Read the processor ID at boot, and save it in "processor_id" as we
did before. Later, when we re-parse the CPU type in the setup.c code,
re-use the value stored in "processor_id".
This allows a cleaner work-around for noMMU devices without CP#15.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
sys_fork is not supported in nommu mode. The other syscalls that are
not supported in nommu mode are defined as cond_syscall
in kernel/sys_ni.c.
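The mechanism in kernel/sys_ni.c is the cond_syscall() stub, which falls back
to sys_ni_syscall() when the real implementation is not built; for example:
  cond_syscall(sys_fork);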
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Ben Dooks
arch/arm/kernel/setup.c declares mem_fclk_21285 when
this is already declared in include/asm-arm/system.h
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Ben Dooks
arch/arm/kernel/compat.c exports two functions,
convert_to_tag_list and squash_mem_tags which
are not defined in any header files, and not
used outside arch/arm/kernel.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Ben Dooks
Fix the following warnings from sparse:
arch/arm/kernel/process.c:86:6: warning: symbol 'default_idle' was not declared. Should it be static?
arch/arm/kernel/process.c:378:5: warning: symbol 'dump_fpu' was not declared. Should it be static?
Include <linux/elfcore.h> for the dump_fpu() declaration, and
make default_idle() static as it is not used outside the file.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the reliance of iwmmxt on hand coded alignments.
Since thread_info is always 8K aligned, specifying that fpstate is
8-byte aligned achieves the same effect without needing to resort
to hand coded alignments.
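A sketch of the idea (member names are illustrative; the real union also
carries the iWMMXt state):
  union fp_state {
          struct fp_hard_struct   hard;
          struct fp_soft_struct   soft;
  } __attribute__((aligned(8)));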
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Also from Thomas Gleixner <tglx@linutronix.de>
Function next_timer_interrupt() got broken by a recent patch
6ba1b91213 when sys_nanosleep() was moved to
hrtimer. This broke things because next_timer_interrupt() did not check the
hrtimer tree for the next event.
Function next_timer_interrupt() is needed by dyntick (CONFIG_NO_IDLE_HZ,
VST) implementations, as the system can be idle when the next hrtimer event
is supposed to happen. At least ARM and S390 currently use
next_timer_interrupt().
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Although you could ask the kernel for panic-on-oops, it remained
non-functional because the architecture specific code fragment had
not been implemented. Add it, so it works as advertised.
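The architecture fragment amounts to honouring the existing sysctl in the
die() path; a condensed sketch:
  if (panic_on_oops)
          panic("Fatal exception");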
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
Commit 99595d0237 forgot to intercept
sys_socketcall as well.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
A change to the SMP initialisation caused the following oops:
CPU1: Booted secondary processor
CPU1: D VIPT write-back cache
CPU1: I cache: 32768 bytes, associativity 4, 32 byte lines, 256 sets
CPU1: D cache: 32768 bytes, associativity 4, 32 byte lines, 256 sets
<7>Calibrating delay loop... 83.14 BogoMIPS (lpj=415744)
<1>Unable to handle kernel NULL pointer dereference at virtual address 0000001c
...
PC is at enqueue_task+0x1c/0x64
LR is at activate_task+0xcc/0xe4
SMP initialisation now requires cpu_possible_map to be initialised in
setup_arch(). Move this from smp_prepare_cpus() to smp_init_cpus()
and call it from our setup_arch() if CONFIG_SMP is enabled.
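A condensed sketch of the new split (the core-count helper is hypothetical):
  void __init smp_init_cpus(void)
  {
          unsigned int i;
          for (i = 0; i < platform_core_count(); i++)
                  cpu_set(i, cpu_possible_map);
  }
and from setup_arch():
  #ifdef CONFIG_SMP
          smp_init_cpus();
  #endif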
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
The pre-ARMv5 implementation can be aborted if an exception occurs in
the middle of it. Because of that, the ARMv6 implementation doesn't
re-attempt the operation on a failed strex either. Let's make the
transient nature of such a false positive more explicit in the
definition.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
The cmpxchg emulation on pre-ARMv5 relies on user code executed from a
kernel address. If the operation cannot complete atomically, it is
aborted from the usr_entry macro by clearing the Z flag. This clearing
of the Z flag is done whenever the user pc is above TASK_SIZE.
However this "pc >= TASK_SIZE" test cannot work in the non MMU case.
Worse: the current code will corrupt the Z flag on every entry to the
kernel.
Let's disable it in the non MMU case for now. Using NPTL on non MMU
targets needs to be worked out anyway.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
struct sockaddr_un loses its padding with EABI. Since the size of the
structure is used as a validation test in unix_mkname(), we need to
change the length argument to 110 whenever it is 112.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM entry-common.S needs to know syscall table size; in itself that would
not be a problem, but there's an additional constraint - some of the
instructions using it want a constant that would be a multiple of 4.
So we have to pad syscall table with sys_ni_syscall and that's where
the trouble begins. .rept pseudo-op wants a constant expression for
number of repetitions and subtraction of two labels (before and after
syscall table) doesn't always get simplified to constant early enough
for .rept. If labels end up in different frags, we lose. And while
the frag size is large enough (slightly below 4Kb), the syscall table
is about 1/3 of that. We used to get away with that, but the recent
changes had been enough to trigger the breakage.
Proper fix is simple: have a macro (CALL(x)) to populate the table
instead of using explicit .long x and the first time we include calls.S
have it defined to .equ NR_syscalls,NR_syscalls+1. Then we can find
the proper amount of padding on the first inclusion simply by looking
at NR_syscalls at that time. And that will be constant, no matter what.
Moreover, the same trick kills the need of having an estimate of padded
NR_syscalls - it will be calculated for free at the same time.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
This is kernel provided user space code.
Since a syscall is used, it has to be updated to work with EABI.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
The signal return path consists of user code provided by the kernel.
Since a syscall is used, it has to be updated to work with EABI.
Noticed by Daniel Jacobowitz.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
This is needed by strace to properly handle the tracing of some system
calls. It could be useful for other applications as well.
Based on an earlier patch from Daniel Jacobowitz.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Daniel Jacobowitz <dan@debian.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
This patch adds the required code to support both user space ABIs at
the same time. A second syscall table is created to include legacy ABI
syscalls that need an ABI compat wrapper.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
The difference between EABI and the legacy ABI may affect structure
member alignment and/or argument register selection. The patch has the
details, and the sketch after the syscall list below illustrates the
alignment case.
Included are wrappers for the following syscalls:
sys_stat64
sys_lstat64
sys_fstat64
sys_fcntl64
sys_epoll_ctl
sys_epoll_wait
sys_ipc
sys_semop
sys_semtimedop
sys_pread64
sys_pwrite64
sys_truncate64
sys_ftruncate64
sys_readahead
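As promised above, an illustration of the alignment difference; the layout
mirrors the epoll_event case behind the sys_epoll_ctl/sys_epoll_wait
wrappers (names are illustrative):
  struct example_event {
          unsigned int            events;
          unsigned long long      data;
  };
  /* legacy ABI: 64-bit members are 4-byte aligned, sizeof == 12
   * EABI:       64-bit members are 8-byte aligned, sizeof == 16 */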
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
struct statfs64 has extra padding with EABI, growing its size from 84 to
88 bytes. This struct is now __attribute__((packed,aligned(4))), with a small
assembly wrapper to force the size argument back to 84 when it is 88, so we
avoid copying the extra padding over user space memory that does not expect it.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
For a while we wanted to change the way syscalls were called on ARM.
Instead of encoding the syscall number in the swi instruction which
requires reading back the instruction from memory to extract that number
and polluting the data cache, it was decided that simply storing the
syscall number into r7 would be more efficient. Since this represents
an ABI change, making that change at the same time as EABI support is
the right thing to do.
It is now expected that EABI user space binaries put the syscall number
into r7 and use "swi 0" to call the kernel. Syscall register arguments
are also expected to follow the EABI arrangement, i.e. 64-bit arguments
should be put in a pair of registers starting with an even register number.
Example with long ftruncate64(unsigned int fd, loff_t length):
legacy ABI:
- put fd into r0
- put length into r1-r2
- use "swi #(0x900000 + 194)" to call the kernel
new ARM EABI:
- put fd into r0
- put length into r2-r3 (skipping over r1)
- put 194 into r7
- use "swi 0" to call the kernel
Note that it is important to use 0 for the swi argument as backward
compatibility with legacy ABI user space relies on this.
The syscall macros in asm-arm/unistd.h were also updated to support
both ABIs and implement the right call method automatically.
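A minimal user-space sketch of the new convention, assuming an EABI toolchain
(this is not the actual unistd.h macro):
  #include <asm/unistd.h>
  static inline long eabi_getpid(void)
  {
          register long r7 asm("r7") = __NR_getpid;       /* syscall number in r7 */
          register long r0 asm("r0");
          asm volatile("swi 0" : "=r" (r0) : "r" (r7) : "memory");
          return r0;
  }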
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
The ARM EABI defines new names for GCC helper functions.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
We must make sure that assembly code that modifies the stack pointer
before calling a C function does so in a way that keeps it 64-bit aligned.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
The ARM EABI says that the stack pointer has to be 64-bit aligned for
reasons already mentioned in patch #3101 when calling C functions.
We therefore must verify and adjust sp accordingly when taking an
exception from kernel mode since sp might not necessarily be 64-bit
aligned if the exception occurs in the middle of a kernel function.
If the exception occurs while in user mode then no sp fixup is needed as
long as sizeof(struct pt_regs) as well as any additional syscall data
stack space remain multiples of 8.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds register switch support in nommu mode.
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
This field is redundant since it must be equal to PHYS_OFFSET anyway.
First, let's use PHYS_OFFSET directly instead.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM clock semaphores are used as strict mutexes; convert them to the
new mutex implementation.
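The conversion follows the usual pattern, roughly (the mutex name is assumed):
  static DEFINE_MUTEX(clocks_mutex);
  mutex_lock(&clocks_mutex);
  /* ... walk or modify the clock list ... */
  mutex_unlock(&clocks_mutex);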
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
If the low interrupt latency mode is enabled for the CPU (from ARMv6
onwards), the ldm/stm instructions are no longer atomic. An ldm instruction
restoring the sp and pc registers can be interrupted immediately after sp
was updated but before the pc. If this happens, the CPU restores the base
register to the value before the ldm instruction but if the base register
is not sp, the interrupt routine will corrupt the stack and the restarted
ldm instruction will load garbage.
Note that future ARM cores might always run in the low interrupt latency
mode.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
Since ARM1176, the CPU ID format has changed and it will also be used for
future ARM architectures.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch: Use <linux/capability.h> where capable() is used.
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove various things which were checking for gcc-1.x and gcc-2.x compilers.
From: Adrian Bunk <bunk@stusta.de>
Also some documentation updates, and removal of some code paths for gcc < 3.2.
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Some ARM platforms have the ability to program the interrupt controller to
detect various interrupt edges and/or levels. For some platforms, this is
critical to set up correctly, particularly those where the setting depends
on the device.
Currently, ARM drivers do (e.g.) the following:
err = request_irq(irq, ...);
set_irq_type(irq, IRQT_RISING);
However, if the interrupt has previously been programmed to be level sensitive
(for whatever reason) then this will cause an interrupt storm.
Hence, if we combine set_irq_type() with request_irq(), we can then safely set
the type prior to unmasking the interrupt. The unfortunate problem is that in
order to support this, these flags need to be visible outside of the ARM
architecture - drivers such as smc91x need these flags and they're
cross-architecture.
Finally, the SA_TRIGGER_* flag passed to request_irq() should reflect the
property that the device would like. The IRQ controller code should do its
best to select the most appropriate supported mode.
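With the new flags the two steps collapse into one call; an illustrative
driver fragment (handler and device names are hypothetical):
  err = request_irq(irq, my_interrupt, SA_TRIGGER_RISING, "mydev", dev);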
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Richard Purdie
ARM doesn't use ACPI so ARM's apm implementation has no need to depend
on PM_LEGACY. This patch removes that dependency.
Signed-off-by: Richard Purdie <rpurdie@rpsys.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since we now only build arch/arm/kernel/dma.c on machine types
which set ISA_DMA_API, we don't need to define MAX_DMA_CHANNELS
to 0 to indicate this - this definition becomes superfluous.
Remove it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ISA_DMA_API tells the rest of the kernel if the ISA DMA API is
available. Select this symbol only on machine types which make
use of the ISA DMA API.
Make building of arch/arm/kernel/dma.c depend on this symbol -
if a machine does not support the ISA DMA API, it's pointless
building this file.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no need to have DMA initialised at the same time as
interrupts. Move it to a core_initcall().
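A condensed sketch of the change (the init function name is illustrative):
  static int __init arm_dma_init(void)
  {
          /* claim the ISA DMA channel resources, previously set up
           * together with the interrupt code */
          return 0;
  }
  core_initcall(arm_dma_init);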
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The old __address element in struct scatterlist remained from older
kernels because the ARM DMA emulation code made use of it. Move
this field into struct dma_struct, and convert DMA emulation code
to setup a SG entry as required.
Also, convert DMA emulation code to use the new DMA API rather
than the PCI DMA API.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Allow the compiler to optimise the bus_to_virt(virt_to_bus())
transformation in the ARM ISA DMA interface.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch/arm/kernel/entry-armv.S has contained a comment suggesting
that asm/hardware.h and asm/arch/irqs.h should be moved into the
asm/arch/entry-macro.S include. So move the includes to these
two files as required.
Add missing includes (asm/hardware.h, asm/io.h) to asm/arch/system.h
includes which use those facilities, and remove asm/io.h from
kernel/process.c.
Remove other unnecessary includes from arch/arm/kernel, arch/arm/mm
and arch/arm/mach-footbridge.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We are coding the kernel link address into the makefiles, which is
invisibly dependent on PAGE_OFFSET. If PAGE_OFFSET is changed, the
makefiles also need to be changed.
Make adjustments such that the makefiles encode just the offset from
PAGE_OFFSET for the kernel link address, and use PAGE_OFFSET in the
linker scripts directly.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
Strictly speaking, the NPTL kernel helpers are required for pre ARMv6
only. They are available on ARMv6+ as well for obvious compatibility
reasons. However there are cases where extra memory barriers are needed
when using an SMP ARMv6 machine but not on pre-ARMv6.
This patch adds a memory barrier kernel helper that glibc can use as
needed for pre-ARMv6 binaries to be forward compatible with an SMP
kernel on ARMv6, as well as the necessary dmb instructions to the
cmpxchg helper.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Acked-by: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than providing more wrappers for 6-arg syscalls, arrange for
them to be supported as standard. This just means that we always
store the 6th argument on the stack, rather than in the wrappers.
This means we eliminate the wrappers for:
* sys_futex
* sys_arm_fadvise64_64
* sys_mbind
* sys_ipc
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Daniel Jacobowitz
Handle new EABI relocations when loading kernel modules. This is
necessary for CONFIG_AEABI kernels, and also for some broken
(since fixed) old ABI toolchains.
Signed-off-by: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nikola Valerjev
Single stepping an application using ptrace() fails over ARM instructions BX and BLX.
Steps to reproduce:
Compile and link the following files
main.c
-----
void foo();
int main() {
foo();
return 0;
}
foo.s
-----
.text
.globl foo
foo:
BX LR
Using ptrace() functionality, run to main(), and start singlestepping.
Singlestepping over the "BX LR" instruction won't transfer control back
to main, but will run the code to completion.
This problem seems to be in the function get_branch_address() in
arch/arm/kernel/ptrace.c. The function doesn't seem to recognize BX
and BLX instructions as branches. BX and BLX instructions can be used
to switch from ARM to Thumb mode if the target address has the low
bit set. However, they are also perfectly legal in ARM-only mode.
Although other things in the kernel seem to indicate that only ARM
mode is accepted (and not Thumb), many compilers will generate BX
and BLX instructions even when generating ARM-only code.
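A hedged sketch of the extra decode needed in get_branch_address(); the masks
follow the ARM ARM encodings for BX/BLX (register), not necessarily the exact
hunk applied:
  if ((insn & 0x0ffffff0) == 0x012fff10 ||        /* BX  Rm */
      (insn & 0x0ffffff0) == 0x012fff30)          /* BLX Rm */
          alt = get_user_reg(child, insn & 0xf);  /* branch target is in Rm */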
Signed-off-by: Nikola Valerjev <nikola@ghs.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We don't really need to check whether the machine type is Netwinder
or CATS before setting up the PCI IO mapping for debugging. This
allows us to eliminate asm/mach-types.h from head.S
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Daniel Jacobowitz
After delivering a signal (creating its stack frame) we must check for
additional pending unblocked signals before returning to userspace.
Otherwise signals may be delayed past the next syscall or reschedule.
Once that was fixed it became obvious that the ARM signal mask manipulation
was broken. It was a little bit broken before the recent SA_NODEFER
changes, and then very broken after them. We must block the requested
signals before starting the handler or the same signal can be delivered
again before the handler even gets a chance to run.
Signed-off-by: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Unfortunately, later gcc versions error out when our get_user is passed
a const pointer, since we write to a temporary variable declared as
typeof(*(p)) which propagates the const-ness.
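An illustration of the failure mode, mirroring what the macro does internally:
  const int __user *uptr;
  typeof(*uptr) tmp;      /* 'const int': the temporary get_user() declares
                           * this way can no longer be assigned, hence the error */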
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since few people need the support anymore, this moves the legacy
pm_xxx functions to CONFIG_PM_LEGACY, and include/linux/pm_legacy.h.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Make some changes to the NEED_RESCHED and POLLING_NRFLAG to reduce
confusion, and make their semantics rigid. Improves efficiency of
resched_task and some cpu_idle routines.
* In resched_task:
- TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
and as we hold it during resched_task, there is no need for an
atomic test and set there. The only other time this should be set is
when the task's quantum expires, in the timer interrupt - this is
protected against because the rq lock is irq-safe.
- If TIF_NEED_RESCHED is set, then we don't need to do anything. It
won't get unset until the task gets schedule()d off.
- If we are running on the same CPU as the task we resched, then set
TIF_NEED_RESCHED and no further action is required.
- If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
after TIF_NEED_RESCHED has been set, then we need to send an IPI.
Using these rules, we are able to remove the test and set operation in
resched_task, and make clear the previously vague semantics of
POLLING_NRFLAG.
* In idle routines:
- Enter cpu_idle with preempt disabled. When the need_resched() condition
becomes true, explicitly call schedule(). This makes things a bit clearer
(IMO), but I haven't updated all architectures yet.
- Many do a test and clear of TIF_NEED_RESCHED for some reason. According
to the resched_task rules, this isn't needed (and actually breaks the
assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock
held). So remove that. Generally one less locked memory op when switching
to the idle thread.
- Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the inner
most polling idle loops. The above resched_task semantics allow it to be
set until before the last time need_resched() is checked before going into
a halt requiring interrupt wakeup.
Many idle routines simply never enter such a halt, and so POLLING_NRFLAG
can be always left set, completely eliminating resched IPIs when rescheduling
the idle task.
POLLING_NRFLAG width can be increased, to reduce the chance of resched IPIs.
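A condensed sketch of the resulting idle pattern described above (not any one
architecture's final code): preemption stays disabled, TIF_POLLING_NRFLAG
stays set, and schedule() is called explicitly.
  void cpu_idle(void)
  {
          set_thread_flag(TIF_POLLING_NRFLAG);
          while (1) {
                  while (!need_resched())
                          cpu_relax();            /* polling: no resched IPI needed */
                  preempt_enable_no_resched();
                  schedule();
                  preempt_disable();
          }
  }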
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Run idle threads with preempt disabled.
Also corrected a bug in arm26's cpu_idle (make it actually call schedule()).
How did it ever work before?
Might fix the CPU hotplugging hang which Nigel Cunningham noted.
We think the bug hits if the idle thread is preempted after checking
need_resched() and before going to sleep, then the CPU is offlined.
After calling stop_machine_run, the CPU eventually returns from preemption
into the idle thread and goes to sleep. The CPU will continue executing the
previous idle loop and have no chance to call play_dead.
By disabling preemption until we are ready to explicitly schedule, this bug is
fixed and the idle threads generally become more robust.
From: alexs <ashepard@u.washington.edu>
PPC build fix
From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
MIPS build fix
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Nicolas Pitre
Noticed by Woody Suwalski <woodys@xandros.com>.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add infrastructure for supporting per-cpu local timers to update
the profiling information and update system time accounting.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch/arm/kernel/irq.c:998:26: warning: Using plain integer as NULL pointer
arch/arm/kernel/smp.c:145:25: warning: Using plain integer as NULL pointer
arch/arm/kernel/smp.c:362:5: warning: symbol 'smp_call_function_on_cpu' was not declared. Should it be static?
drivers/video/amba-clcd.c:521:12: warning: symbol 'amba_clcdfb_init' was not declared. Should it be static?
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The sys_ptrace boilerplate code (everything outside the big switch
statement for the arch-specific requests) is shared by most architectures.
This patch moves it to kernel/ptrace.c and leaves the arch-specific code as
arch_ptrace.
Some architectures have a ptrace implementation that is too different, so we
have to exclude them; they continue to keep their own implementations. For
sh64 I had to add a sh64_ptrace wrapper because it does some initialization
on the first call.
For um I removed an ifdefed SUBARCH_PTRACE_SPECIAL block, but
SUBARCH_PTRACE_SPECIAL isn't defined anywhere in the tree.
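The per-architecture entry point that the shared boilerplate now calls has
roughly this shape (the signature is assumed here):
  long arch_ptrace(struct task_struct *child, long request, long addr, long data);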
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Mackerras <paulus@samba.org>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Acked-By: David Howells <dhowells@redhat.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
glibc expects to count lines beginning with "processor" to determine
the number of processors, not lines beginning with "Processor". So,
give glibc the format it expects.
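The per-CPU line then comes out along these lines:
  seq_printf(m, "processor\t: %d\n", cpu);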
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We don't want to call dump_cpu_info() from cpu_init() after boot, since
cpu_init() gets called on resume and on hotplug CPU insertion events, and
that produces a lot of unnecessary noise.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
Since we know the value of cpsr on entry, we can replace the bic+orr with
a single eor. Also remove a possible result delay (at least on XScale).
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>