This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().
This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
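In the shared version the alias parameters simply fall out of the probed
cache geometry; roughly (a sketch of the compute_alias() helper as it
lands in the common cache code):

    static void compute_alias(struct cache_info *c)
    {
        /* index bits that reach beyond the page offset can alias */
        c->alias_mask = ((c->sets - 1) << c->entry_shift) &
                        ~(PAGE_SIZE - 1);
        c->n_aliases = c->alias_mask ?
                       (c->alias_mask >> PAGE_SHIFT) + 1 : 0;
    }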
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were previously littered around tlb-nommu.c and pg-nommu.c, though at
this point there are more stubs than strictly TLB- or page-op-related code,
so just consolidate them in a single nommu.c.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of reorganizing to allow nommu to use the new and
generic cache.c; no functional changes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and the alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in kmap_coherent() for the non-SH4 cases to permit the
pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in
the TODO state, but will move over to fixmap and the generic interface
gradually.
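In outline, the generic kmap_coherent() picks a fixmap slot of the same
cache colour as the user address, so the kernel-side access cannot alias
against the user mapping. A sketch, eliding the preemption and TLB
bookkeeping:

    void *kmap_coherent(struct page *page, unsigned long addr)
    {
        enum fixed_addresses idx;
        unsigned long vaddr;

        /* choose the slot matching the colour of the user address */
        idx = (addr & current_cpu_data.dcache.alias_mask) >> PAGE_SHIFT;
        vaddr = __fix_to_virt(FIX_CMAP_END - idx);

        set_pte(kmap_coherent_pte - (FIX_CMAP_END - idx),
                mk_pte(page, PAGE_KERNEL));

        return (void *)vaddr;
    }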
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off the ifdef from kmap_coherent_init() and just bails if
there are no cache aliases. This permits the kmap coherent code to be
used on other CPUs.
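With the ifdef gone, the guard is just the probed alias count, something
along these lines:

    void __init kmap_coherent_init(void)
    {
        unsigned long vaddr;

        /* no d-cache aliases, nothing for kmap_coherent() to fix up */
        if (!boot_cpu_data.dcache.n_aliases)
            return;

        /* cache the fixmap pte used by the coherent kmaps */
        vaddr = __fix_to_virt(FIX_CMAP_BEGIN);
        kmap_coherent_pte = kmap_get_fixmap_pte(vaddr);
    }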
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly into do_page_fault(), as these require
slow path handling.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This optimizes for the cases when a CPU does not yet have a valid ASID
context associated with it, as in this case there is no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.
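The shape of the check (flush_cache_mm() shown; the page/range variants
gain the same early return, with cpu_context()/NO_CONTEXT being the sh
mmu_context helpers):

    void flush_cache_mm(struct mm_struct *mm)
    {
        /*
         * No ASID has been allocated for this mm, so nothing
         * belonging to it can be live in the caches yet.
         */
        if (cpu_context(smp_processor_id(), mm) == NO_CONTEXT)
            return;

        /* ... the actual flushing ... */
    }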
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now with all of the prep work out of the way, kill off the SH-5 variants
and use the SH-4 version directly. This also takes advantage of the
unrolling that was previously done for the new version.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in some register alignment helpers for the shared flushers,
allowing them to also be used on SH-5. The main rationale here is that
in the SH-5 case we have a variable ABI, where the pointer size may not
equal the register width. This register extension is already taken care
of by the SH-5 code today, and is simply unneeded in the SH-4 case.
This combines the two and allows us to kill off the SH-5 implementation.
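Roughly, the helpers reduce to a register-width type plus a cast that is
a no-op on SH-4 and a sign extension on SH-5 (a sketch; names as used by
the shared flushers):

    #ifdef CONFIG_SUPERH32
    typedef unsigned long reg_size_t;
    #define register_align(val)  ((unsigned long)(val))
    #else
    /* extend the 32-bit pointer out to the 64-bit register width */
    typedef unsigned long long reg_size_t;
    #define register_align(val) \
        ((unsigned long long)(signed long long)(signed long)(val))
    #endif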
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of unrolling for the SH-4 region flushers.
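As an illustration, the writeback flusher ends up looking roughly like
this, handling eight lines per iteration (__ocbwb() standing in for the
ocbwb instruction):

    void __flush_wback_region(void *start, int size)
    {
        reg_size_t aligned_start, v, cnt, end;

        aligned_start = register_align(start);
        v = aligned_start & ~(L1_CACHE_BYTES - 1);
        end = (aligned_start + size + L1_CACHE_BYTES - 1) &
              ~(L1_CACHE_BYTES - 1);
        cnt = (end - v) / L1_CACHE_BYTES;

        /* unrolled by eight to cut down on loop overhead */
        while (cnt >= 8) {
            __ocbwb(v); v += L1_CACHE_BYTES;
            __ocbwb(v); v += L1_CACHE_BYTES;
            __ocbwb(v); v += L1_CACHE_BYTES;
            __ocbwb(v); v += L1_CACHE_BYTES;
            __ocbwb(v); v += L1_CACHE_BYTES;
            __ocbwb(v); v += L1_CACHE_BYTES;
            __ocbwb(v); v += L1_CACHE_BYTES;
            __ocbwb(v); v += L1_CACHE_BYTES;
            cnt -= 8;
        }

        while (cnt) {
            __ocbwb(v); v += L1_CACHE_BYTES;
            cnt--;
        }
    }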
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out the SH-4 __flush_xxx_region() functions and defines them
as weak symbols. This allows us to provide optimized versions without
having to ifdef cache-sh4.c to death.
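The generic definitions are simply tagged __weak, so an optimized
variant elsewhere overrides them at link time with no ifdefs:

    /* generic fallback, overridden at link time by optimized versions */
    void __weak __flush_wback_region(void *start, int size)
    {
        /* ... line-at-a-time writeback ... */
    }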
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This consolidates all of the NEFF-based sign extension for SH-5.
In the future the other SH code will need to make use of this as well,
so make it generic in preparation for more 32/64 consolidation.
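The consolidated helper is essentially the following (NEFF being 32 on
today's parts):

    #define NEFF       32
    #define NEFF_SIGN  (1LL << (NEFF - 1))
    #define NEFF_MASK  (-1LL << NEFF)

    static inline unsigned long long neff_sign_extend(unsigned long val)
    {
        unsigned long long extended = val;

        return (extended & NEFF_SIGN) ?
               (extended | NEFF_MASK) : extended;
    }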
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a __flush_anon_page() that handles both the aliasing and
non-aliasing cases. This fixes up some crashes with heavy
get_user_pages() users.
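A condensed view of the logic; in the aliasing case the flush goes
through a coherent kmap so it hits the user's cache colour:

    void __flush_anon_page(struct page *page, unsigned long vmaddr)
    {
        unsigned long addr = (unsigned long)page_address(page);

        if (pages_do_alias(addr, vmaddr)) {
            /* flush through a kernel mapping of matching colour */
            void *kaddr = kmap_coherent(page, vmaddr);
            __flush_wback_region(kaddr, PAGE_SIZE);
            kunmap_coherent();
        } else
            __flush_wback_region((void *)addr, PAGE_SIZE);
    }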
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in a BUG_ON() in kmap_coherent() for PG_dcache_dirty pages
to catch when things go horribly wrong. Copied from the MIPS
implementation.
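The check itself is a one-liner at the top of kmap_coherent():

    /* a PG_dcache_dirty page must be written back before mapping it */
    BUG_ON(test_bit(PG_dcache_dirty, &page->flags));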
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The TLB miss fast-path presently calls into update_mmu_cache() to
set up the entry, and does so with a NULL vma. Check for vma validity
in the __update_tlb() ptrace checks.
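The check in question:

    /*
     * A debugger faulting in on behalf of a debuggee arrives here
     * with an mm other than current->active_mm, while the TLB miss
     * fastpath hands us a NULL vma.
     */
    if (vma && current->active_mm != vma->vm_mm)
        return;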
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out a separate __update_cache()/__update_tlb() for
update_mmu_cache() to wrap in to. This lets us share the common
__update_cache() bits while keeping special __update_tlb() handling
broken out.
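The wrapper itself then becomes trivial:

    void update_mmu_cache(struct vm_area_struct *vma,
                          unsigned long address, pte_t pte)
    {
        __update_cache(vma, address, pte);
        __update_tlb(vma, address, pte);
    }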
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that the SH-4 page clear/copy ops are generic, they can be used for
all platforms with CONFIG_MMU=y. SH-5 remains the odd one out, but it too
will gradually be converted over to using this interface.
SH-3 platforms which do not contain aliases will see no impact from this
change, while aliasing SH-3 platforms will get the same interface as
SH-4.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This wires up clear_user_highpage() on SH-4 and subsequently converts the
SH7705 32kB cache mode over to using it. Now that the SH-4 implementation
handles all of the dcache purging directly in the aliasing case, there is
no need to do this in the default clear_page() implementation.
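A sketch of the SH-4 variant (assuming the pages_do_alias() helper and
the kmap_atomic() calling convention of this era):

    void clear_user_highpage(struct page *page, unsigned long vaddr)
    {
        void *kaddr = kmap_atomic(page, KM_USER0);

        clear_page(kaddr);

        /* only an aliasing kernel address needs the explicit purge */
        if (pages_do_alias((unsigned long)kaddr, vaddr & PAGE_MASK))
            __flush_wback_region(kaddr, PAGE_SIZE);

        kunmap_atomic(kaddr, KM_USER0);
    }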
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This inverts the delayed dcache flush a bit to be more in line with other
platforms. At the same time this also gives us the ability to do some
more optimizations and cleanup. Now that the update_mmu_cache() callsite
only tests for the bit, the implementation can gradually be split out and
made generic, rather than relying on special implementations for each of
the peculiar CPU types.
SH7705 in 32kB mode and SH-4 still need slightly different handling, but
this is something that can remain isolated in the varying page copy/clear
routines. On top of that, SH-X3 is dcache coherent, so there is no need
to bother with any of these tests in the PTEAEX version of
update_mmu_cache(), so we kill that off, too.
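In sketch form, the deferred flush on the update_mmu_cache() path now
reduces to testing the bit:

    pfn = pte_pfn(pte);
    if (pfn_valid(pfn)) {
        struct page *page = pfn_to_page(pfn);

        /* write back data deferred from flush_dcache_page() */
        if (page_mapping(page) &&
            test_and_clear_bit(PG_dcache_dirty, &page->flags))
            __flush_wback_region(page_address(page), PAGE_SIZE);
    }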
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The last commit changed the behaviour on kernel faults when we were
doing something other than syncing the page tables. vmalloc_sync_one()
needs to return NULL if the page tables are up to date, because the
reason for the fault was not a missing/inconsistent page table entry. By
returning NULL if the page tables are sync'd we signal to the calling
function that further work must be done to resolve this fault.
Also, remove the superfluous __va() around the first argument to
vmalloc_sync_one(). The value of pgd_k is already a virtual address,
and using it with __va() causes a NULL dereference.
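The tail of vmalloc_sync_one() after the fix, roughly:

    pmd = pmd_offset(pud, address);
    pmd_k = pmd_offset(pud_k, address);
    if (!pmd_present(*pmd_k))
        return NULL;

    if (!pmd_present(*pmd))
        set_pmd(pmd, *pmd_k);
    else {
        /*
         * The page tables are already in sync, so the fault was
         * not caused by a missing or inconsistent entry; return
         * NULL so the caller keeps handling the fault.
         */
        return NULL;
    }

    return pmd_k;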
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* Remove smp_lock.h from files which don't need it (including some headers!)
* Add smp_lock.h to files which do need it
* Make smp_lock.h include conditional in hardirq.h (see the sketch below);
  it's needed only for one kernel_locked() usage which is under CONFIG_PREEMPT.
  This will make hardirq.h inclusion cheaper for every PREEMPT=n config
  (which includes allmodconfig/allyesconfig, BTW).
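The conditional include amounts to something like:

    /* hardirq.h: kernel_locked() is only needed under PREEMPT */
    #if defined(CONFIG_PREEMPT)
    # include <linux/smp_lock.h>
    #endif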
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This rewrites the vmalloc fault handling as per x86, which subsequently
allows for easy future tie-in for vmalloc_sync_all().
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Like the UP case, use lmb as the foundation of memory resource
management on NUMA.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds page fault instrumentation for the software performance
counters. Follows the x86 and powerpc changes.
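The instrumentation boils down to a counter bump on fault entry plus
major/minor accounting, using the perf_swcounter_event() helper of this
era (tsk being the faulting task):

    perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS, 1, 0, regs, address);

    /* ... after handle_mm_fault() ... */

    if (fault & VM_FAULT_MAJOR) {
        tsk->maj_flt++;
        perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
                             1, 0, regs, address);
    } else {
        tsk->min_flt++;
        perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
                             1, 0, regs, address);
    }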
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
set_pte_phys() presently uses the global flush_tlb_one(), which locks on
SMP trying to do the IPI. As we have not even initialized the other CPUs
at this point, switch to the local_ variant so the flush happens on the
boot CPU.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This allows the callers to now pass down the full set of FAULT_FLAG_xyz
flags to handle_mm_fault(). All callers have been (mechanically)
converted to the new calling convention, there's almost certainly room
for architectures to clean up their code and then add FAULT_FLAG_RETRY
when that support is added.
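For a typical architecture the mechanical conversion looks like:

    /* was: fault = handle_mm_fault(mm, vma, address, write); */
    fault = handle_mm_fault(mm, vma, address,
                            write ? FAULT_FLAG_WRITE : 0);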
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/sh has a couple of stray markers, introduced in
commit 3d58695edb, that never grew any users. Remove them in
preparation for removing the markers in favour of the TRACE_EVENT
macro (and also because we don't keep dead code around).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off after_bootmem and switches to using slab_is_available()
instead. Presently the only place this is used is by the sh64 ioremap,
and there's not much point in keeping the reference around otherwise.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Several platforms want to be able to do large physically contiguous
allocations (primarily nommu and video codecs on SH-Mobile), so provide
a MAX_ORDER override for those cases.
Tested-by: Conrad Parker <conrad@metadecks.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Consolidate these in a single place in the Kconfig menus. At the same
time, disable their interactivity and set them according to the board
config defaults.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently this is special-cased for early initialization. While there are
situations where these static early initializations are still necessary,
with minor changes it is possible to use this for the regular ioremap
implementation as well. This allows us to kill off the special-casing for
the remap completely and to start tidying up all of the SH-5
special-casing in drivers.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently shm_align_mask is only looked at in the bottom-up case, but we
still want it for proper colouring constraints in the topdown case.
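Both search directions want the same colouring helper, which rounds an
address up to a colour boundary and then applies the colour implied by
the file offset (a sketch):

    static inline unsigned long COLOUR_ALIGN(unsigned long addr,
                                             unsigned long pgoff)
    {
        unsigned long base = (addr + shm_align_mask) & ~shm_align_mask;
        unsigned long off = (pgoff << PAGE_SHIFT) & shm_align_mask;

        return base + off;
    }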
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Not all PCI channels have non-translatable memory windows; this is a
special property of the on-chip PCIC with its 0xfd00... mapping, so
handle it explicitly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch changes the code to use __is_pci_memory() instead of
is_pci_memaddr(). __is_pci_memory() loops through all of the PCI
channels on the system to match memory windows.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This prevents the DMA API debugging from running out of entries right
away on boot. Defines 4096 entries by default, which, while a bit on the
heavy side, ought to leave enough breathing room for some time.
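The bump is just a larger preallocation handed to dma_debug_init() at
boot:

    #define PREALLOC_DMA_DEBUG_ENTRIES  4096

    static int __init dma_init(void)
    {
        dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
        return 0;
    }
    fs_initcall(dma_init);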
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Forcing direct-mapped worked on certain older 2-way set associative
parts, but was always error prone on 4-way parts. As these are the
norm these days, there is not much point in continuing to support this
mode. Most of the folks who used direct-mapped mode generally just
wanted writethrough caching in the first place.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>