Currently, highmem is selectable, and you can request an increased
vmalloc area. However, none of this has any effect on the memory
layout since a patch in the highmem series was accidentally dropped.
Moreover, even if you did want highmem, all memory would still be
registered as lowmem, possibly resulting in overflow of the available
virtual mapping space.
The highmem boundary is determined by the highest allowed beginning
of the vmalloc area, which depends on its configurable minimum size
(see commit 60296c71f6 for details on
this).
We should create mappings and initialize bootmem only for low memory,
while the zone allocator must still be told about highmem.
Currently, memory nodes which are completely located in high memory
are not supported. This is not a huge limitation since systems
relying on highmem support are unlikely to have discontiguous memory
with large holes.
[ A similar patch was meant to be merged before commit 5f0fbf9eca
and be available in Linux v2.6.30, however some git rebase screw-up
of mine dropped the first commit of the series, and that goofage
escaped testing somehow as well. -- Nico ]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@marvell.com>
mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()
Upcoming patches to support the new 64-bit "BookE" powerpc architecture
will need to have the virtual address corresponding to the PTE page when
freeing it, due to the way the HW table walker works.
Basically, the TLB can be loaded with "large" pages that cover the whole
virtual space (well, sort-of, half of it actually) represented by a PTE
page, and which contain an "indirect" bit indicating that this TLB
entry's RPN points to an array of PTEs from which the TLB can then create
direct
entries. Thus, in order to invalidate those when PTE pages are deleted,
we need the virtual address to pass to tlbilx or tlbivax instructions.
The old trick of sticking it somewhere in the PTE page's struct page
sucks too much: the address is almost readily available at all call
sites, and almost everybody implements these as macros, so we may as
well add the argument everywhere. I added it to the pmd and pud variants
for consistency.
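For illustration, the interface change boils down to threading one extra
argument through the free paths; a minimal sketch in the asm-generic
style (the exact per-arch bodies differ):

    /*
     * pte_free_tlb() and friends now carry the virtual address covered
     * by the page table page being freed, so that architectures like
     * 64-bit BookE can invalidate the matching "indirect" TLB entries.
     */
    #define pte_free_tlb(tlb, ptep, address)            \
        do {                                            \
            tlb->need_flush = 1;                        \
            __pte_free_tlb(tlb, ptep, address);         \
        } while (0)

pmd_free_tlb() and pud_free_tlb() gain the same third argument.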
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
Acked-by: Nick Piggin <npiggin@suse.de>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
fix the following 'make includecheck' warning:
arch/arm/include/asm/atomic.h: asm/system.h is included more than once.
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull the initial preempt_count value into a single
definition site.
Maintainers for alpha, ia64 and m68k: please have a look,
your arch code is funny.
The header magic is a bit odd, but similar to the KERNEL_DS
one: CPP waits to expand these macros until the
INIT_THREAD_INFO macro itself is expanded, which happens in
arch/*/kernel/init_task.c, where we've already included
sched.h, so we're good.
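A sketch of the result: the value lives in one place in sched.h, and each
architecture's INIT_THREAD_INFO merely references it (the thread_info
initializer below is abbreviated to the relevant field):

    /*
     * Disable preemption until the scheduler is running; reset on the
     * first schedule() call.  (Roughly the shape of the definition.)
     */
    #define INIT_PREEMPT_COUNT  (1 + PREEMPT_ACTIVE)

    /* in each arch's asm/thread_info.h: */
    #define INIT_THREAD_INFO(tsk)                       \
    {                                                   \
        .task           = &tsk,                         \
        .preempt_count  = INIT_PREEMPT_COUNT,           \
    }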
Cc: tony.luck@intel.com
Cc: rth@twiddle.net
Cc: geert@linux-m68k.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Document the layout of our swp PTE entries, adding definitions for
the bit masks/shifts/sizes, and implement MAX_SWAPFILES_CHECK()
such that we fail to build if we are unable to properly encode the
swp type field.
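The encoding on ARM ends up roughly like this (a sketch of the shape;
the shift and width values reflect the free bits in the Linux PTE):

    #define __SWP_TYPE_SHIFT    3
    #define __SWP_TYPE_BITS     6
    #define __SWP_TYPE_MASK     ((1 << __SWP_TYPE_BITS) - 1)
    #define __SWP_OFFSET_SHIFT  (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)

    #define __swp_type(x)       (((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
    #define __swp_offset(x)     ((x).val >> __SWP_OFFSET_SHIFT)
    #define __swp_entry(type, offset) \
        ((swp_entry_t) { ((type) << __SWP_TYPE_SHIFT) | \
                         ((offset) << __SWP_OFFSET_SHIFT) })

    /* refuse to build if the type field cannot encode MAX_SWAPFILES */
    #define MAX_SWAPFILES_CHECK() \
        BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)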
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Update the link script for ARM to use PAGE_SIZE instead of hard-
coded 4096. Also, the old RODATA macro is deprecated in favour of
the RO_DATA(PAGE_SIZE) macro. As a consequence, PAGE_SIZE was
changed from (1UL << PAGE_SHIFT) to (_AC(1,UL) << PAGE_SHIFT),
because the linker does not understand the "UL" suffix on numeric
constants.
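The _AC() helper (from include/linux/const.h) makes this work by pasting
the suffix only when compiling C, roughly:

    #ifdef __ASSEMBLY__
    #define _AC(X, Y)   X               /* asm/linker: plain constant */
    #else
    #define __AC(X, Y)  (X##Y)
    #define _AC(X, Y)   __AC(X, Y)      /* C: constant with suffix */
    #endif

    #define PAGE_SIZE   (_AC(1, UL) << PAGE_SHIFT)

so the linker script sees (1 << PAGE_SHIFT) while C code still gets an
unsigned long constant.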
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This function was only used by pci_claim_resource(), and the last commit
deleted that use.
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (417 commits)
MAINTAINERS: EB110ATX is not ebsa110
MAINTAINERS: update Eric Miao's email address and status
fb: add support of LCD display controller on pxa168/910 (base layer)
[ARM] 5552/1: ep93xx get_uart_rate(): use EP93XX_SYSCON_PWRCNT and EP93XX_SYSCON_PWRCN
[ARM] pxa/sharpsl_pm: zaurus needs generic pxa suspend/resume routines
[ARM] 5544/1: Trust PrimeCell resource sizes
[ARM] pxa/sharpsl_pm: cleanup of gpio-related code.
[ARM] pxa/sharpsl_pm: drop set_irq_type calls
[ARM] pxa/sharpsl_pm: merge pxa-specific code into generic one
[ARM] pxa/sharpsl_pm: merge the two sharpsl_pm.c since it's now pxa specific
[ARM] sa1100: remove unused collie_pm.c
[ARM] pxa: fix the conflicting non-static declarations of global_gpios[]
[ARM] 5550/1: Add default configure file for w90p910 platform
[ARM] 5549/1: Add clock api for w90p910 platform.
[ARM] 5548/1: Add gpio api for w90p910 platform
[ARM] 5551/1: Add multi-function pin api for w90p910 platform.
[ARM] Make ARM_VIC_NR depend on ARM_VIC
[ARM] 5546/1: ARM PL022 SSP/SPI driver v3
ARM: OMAP4: SMP: Update defconfig for OMAP4430
ARM: OMAP4: SMP: Enable SMP support for OMAP4430
...
Without this, the default implementation is a no-op, which is completely
wrong with a VIVT cache, and usage of sg_copy_buffer() produces
unpredictable results.
Tested-by: Sebastian Andrzej Siewior <bigeasy@breakpoint.cc>
CC: stable@kernel.org
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes unused asm/suspend.h files for
the following architectures:
alpha, arm, ia64, m68k, mips, s390, um
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
The current asm-generic/page.h only contains the get_order
function, and asm-generic/uaccess.h only implements
unaligned accesses. This renames the files to getorder.h
and uaccess-unaligned.h to make room for new page.h
and uaccess.h files that will be usable by all simple
(e.g. nommu) architectures.
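For reference, the one function the old page.h carried is essentially
this (a sketch of the generic get_order()):

    /* return the allocation order needed to cover 'size' bytes */
    static inline int get_order(unsigned long size)
    {
        int order;

        size = (size - 1) >> (PAGE_SHIFT - 1);
        order = -1;
        do {
            size >>= 1;
            order++;
        } while (size);
        return order;
    }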
Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The existing asm-generic/atomic.h only defines the
atomic_long type. This renames it to atomic-long.h
so we have a place to add a truly generic atomic.h
that can be used on all non-SMP systems.
Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
This provides a reliable way for asm-generic/types.h and other
files to find out whether they are being built for a 32-bit or a
64-bit platform.
We cannot use CONFIG_64BIT for this in headers that are included
from user space because CONFIG symbols are not available there.
We also cannot do it inside of asm/types.h because some headers
need the word size but cannot include types.h.
The solution is to introduce a new header <asm/bitsperlong.h>
that defines both __BITS_PER_LONG for user space and
BITS_PER_LONG for usage in the kernel. The asm-generic
version falls back to 32 bit unless the architecture overrides
it, which I did for all 64 bit platforms.
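A sketch of the asm-generic fallback:

    #ifndef __ASM_GENERIC_BITS_PER_LONG
    #define __ASM_GENERIC_BITS_PER_LONG

    /* user-space visible; 64-bit architectures override this */
    #ifndef __BITS_PER_LONG
    #define __BITS_PER_LONG 32
    #endif

    #ifdef __KERNEL__
    #ifdef CONFIG_64BIT
    #define BITS_PER_LONG 64
    #else
    #define BITS_PER_LONG 32
    #endif

    /* catch arch/Kconfig disagreements early */
    #if BITS_PER_LONG != __BITS_PER_LONG
    #error Inconsistent word size. Check asm/bitsperlong.h
    #endif
    #endif /* __KERNEL__ */

    #endif /* __ASM_GENERIC_BITS_PER_LONG */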
Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The existing asm-generic versions are incomplete and included
by some architectures. New architectures should be able
to use a generic version, so rename the existing files and
change all users, which lets us add the new files.
Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
collie_pm was the only non-PXA user of sharpsl_pm. Now that it's gone,
we can merge the code into one single file to allow further cleanup.
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Always creating the physical mapping should do no harm, so let's remove
the interface that was provided for its optional creation and make the
mapping static.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Define ARCH_KMALLOC_MINALIGN in asm/cache.h.
At the request of Russell, also move ARCH_SLAB_MINALIGN to this file.
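The result is on the order of the following (a sketch; the exact values
and conditions in asm/cache.h are more involved):

    #define L1_CACHE_SHIFT      5
    #define L1_CACHE_BYTES      (1 << L1_CACHE_SHIFT)

    /*
     * kmalloc() memory may be handed to DMA engines, so keep it
     * cache-line aligned on non-coherent ARM systems.
     */
    #define ARCH_KMALLOC_MINALIGN   L1_CACHE_BYTES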
Signed-off-by: Martin Fuzzey <mfuzzey@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Starting with ARMv6, the CPUs support the BE-8 variant of big-endian
(byte-invariant). This patch adds the core support:
- setting of the BE-8 mode via the CPSR.E bit for both kernel and
user threads
- big-endian page table walking
- the REV instruction used to byte-swap instructions read from memory
during fault processing, as they are still in little-endian format
- Kconfig and Makefile support for BE-8. The --be8 option must be passed
to the final linking stage to convert the instructions to
little-endian
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If a process is interrupted during an If-Then block and a signal
handler is invoked, the ITSTATE bits must be cleared, otherwise the
handler will not run correctly.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joseph S. Myers <joseph@codesourcery.com>
ARMv7 SMP hardware can broadcast TLB maintenance operations in
hardware so that the software can avoid the costly IPIs. This patch
adds the necessary checks (of the MMFR3 CPUID register) to avoid the
software broadcast when the hardware already supports it.
(this patch is based on the work done by Tony Thompson @ ARM)
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This is a RealView platform supporting core tiles with ARM11MPCore,
Cortex-A8 or Cortex-A9 (multicore) processors. It has support for MMC,
CompactFlash and PCI-E.
Signed-off-by: Colin Tuckley <colin.tuckley@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This allows for optional alternative implementations of __copy_to_user
and __clear_user, with a possible runtime fallback to the standard
version when the alternative provides no gain over that standard
version. This is done by making the standard __copy_to_user into a weak
alias for the symbol __copy_to_user_std. Same thing for __clear_user.
Those two functions are particularly good candidates for alternative
implementations, since they rely on the STRT instruction, which has
lower performance than STM instructions on some CPU cores such as
the ARM1176 and Marvell Feroceon.
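In C terms the trick looks like this (a sketch; the real symbols are
defined in assembly, and __copy_to_user_std's body here merely stands in
for the hand-written copy loop):

    /* the always-available standard implementation */
    unsigned long __copy_to_user_std(void *to, const void *from,
                                     unsigned long n)
    {
        /* copy loop elided; returns the number of bytes NOT copied */
        return 0;
    }

    /*
     * Weak alias: unless an alternative implementation provides a
     * strong __copy_to_user, callers get the standard one.  The
     * alternative may itself branch back to __copy_to_user_std for
     * sizes where it has no advantage.
     */
    unsigned long __copy_to_user(void *to, const void *from,
                                 unsigned long n)
        __attribute__((weak, alias("__copy_to_user_std")));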
Signed-off-by: Nicolas Pitre <nico@marvell.com>
* master.kernel.org:/home/rmk/linux-2.6-arm:
[ARM] update mach-types
[ARM] Add cmpxchg support for ARMv6+ systems (v5)
[ARM] barriers: improve xchg, bitops and atomic SMP barriers
Gemini: Fix SRAM/ROM location after memory swap
MAINTAINER: Add F: entries for Gemini and FA526
[ARM] disable NX support for OABI-supporting kernels
[ARM] add coherent DMA mask for mv643xx_eth
[ARM] pxa/palm: fix PalmLD/T5/TX AC97 MFP
[ARM] pxa: add parameter to clksrc_read() for pxa168/910
[ARM] pxa: fix the incorrectly defined drive strength macros for pxa{168,910}
[ARM] Orion: Remove explicit name for platform device resources
[ARM] Kirkwood: Correct MPP for SATA activity/presence LEDs of QNAP TS-119/TS-219.
[ARM] pxa/ezx: fix pin configuration for low power mode
[ARM] pxa/spitz: provide spitz_ohci_exit() that unregisters USB_HOST GPIO
[ARM] pxa: enable GPIO receivers after configuring pins
[ARM] pxa: allow gpio_reset drive high during normal work
[ARM] pxa: save/restore PGSR on suspend/resume.
The flat loader uses an architecture's flat_stack_align() to align the
stack but assumes word-alignment is enough for the data sections.
However, on the Xtensa S6000 we have registers up to 128 bits wide
which can be used from userspace, and we therefore need userspace
stack and data-section alignment of at least this size.
This patch drops flat_stack_align() and uses the same alignment that
is required for slab caches, ARCH_SLAB_MINALIGN, or wordsize if it's
not defined by the architecture.
It also fixes m32r, which was obviously kaput: it aligned an
uninitialized stack entry instead of the stack pointer.
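With that, the alignment choice in the flat loader reduces to something
like the following (a sketch; the align-down expression assumes a
power-of-two alignment):

    #ifdef ARCH_SLAB_MINALIGN
    #define FLAT_DATA_ALIGN (ARCH_SLAB_MINALIGN)
    #else
    #define FLAT_DATA_ALIGN (sizeof(void *))
    #endif

    /* align the stack pointer down to the required boundary */
    sp = (unsigned long *)((unsigned long)sp & ~(FLAT_DATA_ALIGN - 1));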
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Oskar Schirmer <os@emlix.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Bryan Wu <cooloney@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Johannes Weiner <jw@emlix.com>
Acked-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add cmpxchg/cmpxchg64 support for ARMv6K and ARMv7 systems
(original patch from Catalin Marinas <catalin.marinas@arm.com>)
The cmpxchg and cmpxchg64 functions can be implemented using the
LDREX*/STREX* instructions. Since operand lengths other than 32 bits are
required, the full implementations are only available if the ARMv6K
extensions are present (for the LDREXB, LDREXH and LDREXD instructions).
For ARMv6, only 32-bit cmpxchg is available.
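The 32-bit case looks roughly like this (a sketch of the ldrex/strex
retry loop; __cmpxchg_u32 is an illustrative name, and the memory
barriers are supplied by the cmpxchg() wrapper):

    static inline unsigned long __cmpxchg_u32(volatile void *ptr,
                                              unsigned long old,
                                              unsigned long new)
    {
        unsigned long oldval, res;

        do {
            asm volatile(
            "ldrex   %1, [%2]\n"
            "mov     %0, #0\n"
            "teq     %1, %3\n"
            "strexeq %0, %4, [%2]\n"
                : "=&r" (res), "=&r" (oldval)
                : "r" (ptr), "Ir" (old), "r" (new)
                : "memory", "cc");
        } while (res);

        return oldval;
    }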
Mathieu:
- Make cmpxchg_local always available with the best implementation for
all type sizes (1, 2, 4 bytes).
- Make cmpxchg64_local always available.
- Use the "Ir" constraint for the "old" operand, like atomic.h's
atomic_cmpxchg does.
Changes since v3:
- Added "memory" clobbers (thanks to Nicolas Pitre)
- Removed __asmeq(); it is only needed for old compilers, which are
very unlikely on ARMv6+.
Note: ARMv7-M should eventually be #ifdef'ed out of cmpxchg64, but it
is not currently supported by the Linux kernel.
Put back arm < v6 cmpxchg support.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Mathieu Desnoyers pointed out that the ARM barriers were lacking:
- cmpxchg, xchg and atomic add return need memory barriers on
architectures which can reorder the order in which memory reads/writes
are seen between CPUs, which seems to include recent ARM
architectures. Those barriers are currently missing on ARM.
- test_and_xxx_bit were missing SMP barriers.
So put these barriers in. Provide separate atomic_add/atomic_sub
operations which do not require barriers.
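For example, atomic_add_return() on SMP now brackets its ldrex/strex
loop with barriers (a sketch following the ARM implementation):

    static inline int atomic_add_return(int i, atomic_t *v)
    {
        unsigned long tmp;
        int result;

        smp_mb();       /* order prior accesses before the update */

        __asm__ __volatile__("@ atomic_add_return\n"
    "1: ldrex   %0, [%2]\n"
    "   add     %0, %0, %3\n"
    "   strex   %1, %0, [%2]\n"
    "   teq     %1, #0\n"
    "   bne     1b"
        : "=&r" (result), "=&r" (tmp)
        : "r" (&v->counter), "Ir" (i)
        : "cc");

        smp_mb();       /* make the RMW op a full barrier */

        return result;
    }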
Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch provides device drivers which have an OMAP IOMMU with
address mapping APIs between the device virtual address (iommu), the
physical address and the MPU virtual address.
There are 4 possible patterns for IOMMU virtual address (iova/da)
mapping:
        | iova/                  mapping                iommu_          page
        |  da     pa     va      (d)-(p)-(v)            function        type
      ----------------------------------------------------------------------
    1   |  c      c      c       1 - 1 - 1       _kmap() / _kunmap()     s
    2   |  c      c,a    c       1 - 1 - 1       _kmalloc()/ _kfree()    s
    3   |  c      d      c       1 - n - 1       _vmap() / _vunmap()     s
    4   |  c      d,a    c       1 - n - 1       _vmalloc()/ _vfree()    n*
'iova': device iommu virtual address
'da': alias of 'iova'
'pa': physical address
'va': mpu virtual address
'c': contiguous memory area
'd': discontiguous memory area
'a': anonymous memory allocation
'()': optional feature
'n': a normal page (4KB) size is used.
's': multiple iommu superpage sizes (16MB, 1MB, 64KB, 4KB) are used.
'*': not yet, but feasible.
Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
Add support for the DMA blocks in the S3C64XX series of CPUs,
which are based on the ARM PL080 PrimeCell system.
Unfortunately, these DMA controllers diverge from the PL080
design by adding another DMA controller register and
configuration for OneNAND.
Signed-off-by: Ben Dooks <ben@simtec.co.uk>
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
The SCU can be used by non-realview platforms, so make it visible
for other people to use rather than having them copy the header file.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM SMP code wasn't properly updated for the cpumask changes, which
results in smp_timer_broadcast() broadcasting ticks to non-online CPUs.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
From: Bruce Ashfield <bruce.ashfield@windriver.com>
To fully support the armv7-a instruction set/optimizations, support
for the R_ARM_MOVW_ABS_NC and R_ARM_MOVT_ABS relocation types is
required.
MOVW and MOVT are both load-immediate instructions: MOVW loads 16 bits
into the bottom half of a register, and MOVT loads 16 bits into the top
half of a register.
The relocation information for these instructions has a full 32-bit
value, plus an addend which is stored in the 16 immediate bits of the
instruction itself. The immediate bits in the instruction are not
contiguous (the register number splits them into a 4-bit and a 12-bit
field), so the addend has to be extracted accordingly and added to the
value. The value is then split and put back into the instruction: a MOVW
uses the bottom 16 bits of the value, and a MOVT uses the top 16 bits.
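The resulting fixup in the module loader is roughly this (a sketch
following arch/arm/kernel/module.c; offset is a signed 32-bit value and
loc points at the instruction being relocated):

    case R_ARM_MOVW_ABS_NC:
    case R_ARM_MOVT_ABS:
        offset = *(u32 *)loc;

        /* gather the split imm4:imm12 addend and sign-extend it */
        offset = ((offset & 0xf0000) >> 4) | (offset & 0xfff);
        offset = (offset ^ 0x8000) - 0x8000;

        offset += sym->st_value;
        if (ELF32_R_TYPE(rel->r_info) == R_ARM_MOVT_ABS)
            offset >>= 16;      /* MOVT takes the top halfword */

        /* write the 16-bit result back into the split imm fields */
        *(u32 *)loc &= 0xfff0f000;
        *(u32 *)loc |= ((offset & 0xf000) << 4) | (offset & 0xfff);
        break;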
Signed-off-by: David Borman <david.borman@windriver.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>