This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().
This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
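Roughly, the shared code derives the alias information from each probed
cache descriptor along these lines (a sketch only; the struct cache_info
field names follow the SH conventions, the exact formula is illustrative):

  static void compute_alias(struct cache_info *c)
  {
          c->alias_mask = (c->way_size - 1) & ~(PAGE_SIZE - 1);
          c->n_aliases = c->alias_mask ?
                         (c->alias_mask >> PAGE_SHIFT) + 1 : 0;
  }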
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a family member to struct sh_cpuinfo, which allows us to fall
back more on the probe routines to work out what sort of subtype we are
running on. This will be used by the CPU cache initialization code in
order to first do family-level initialization, followed by subtype-level
optimizations.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were previously littered around tlb-nommu.c and pg-nommu.c, though at
this point there are more stubs than are strictly TLB or page op related,
so just consolidate them in a single nommu.c.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of reorganizing for allowing nommu to use the new
and generic cache.c, no functional changes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in kmap_coherent() for the non-SH4 cases to permit the
pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in
the TODO state, but will move over to fixmap and the generic interface
gradually.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off the ifdef from kmap_coherent_init() and just bails if
there are no cache aliases. This permits the kmap coherent code to be
used on other CPUs.
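In other words, the compile-time guard becomes a runtime check on the
probed alias count; a minimal sketch, assuming the
boot_cpu_data.dcache.n_aliases bookkeeping:

  void __init kmap_coherent_init(void)
  {
          /* No D-cache aliases, nothing special to set up. */
          if (!boot_cpu_data.dcache.n_aliases)
                  return;

          /* ... reserve the fixmap slots used for alias-safe mappings ... */
  }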
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly in to do_page_fault(), as these require
slow path handling.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This optimizes for the cases when a CPU does not yet have a valid ASID
context associated with it, as in this case there is no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.
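The check is effectively an early return when no ASID has ever been
assigned; a sketch along the lines of the MIPS code (the cpu_context()
usage is an assumption here):

  void flush_cache_mm(struct mm_struct *mm)
  {
          /*
           * If this mm has never been given an ASID on this CPU it
           * cannot have anything in the caches yet, so bail early.
           */
          if (cpu_context(smp_processor_id(), mm) == 0)
                  return;

          /* ... perform the actual flush ... */
  }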
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now with all of the prep work out of the way, kill off the SH-5 variants
and use the SH-4 version directly. This also takes advantage of the
unrolling that was previously done for the new version.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in some register alignment helpers for the shared flushers,
allowing them to also be used on SH-5. The main rationale here is that
in the SH-5 case we have a variable ABI, where the pointer size may not
equal the register width. This register extension is taken care of by
the SH-5 code already today, and is otherwise unused in the SH-4 code.
This combines the two and allows us to kill off the SH-5 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This inserts a ULONG_MAX entry at the end of the valid entries in the
stack trace buffer so the default code doesn't need to scan to the end of
available slots. This also makes the trace buffer termination behaviour
consistent with the other architectures.
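The termination itself is the usual pattern, roughly:

  void save_stack_trace(struct stack_trace *trace)
  {
          /* ... walk the stack and fill trace->entries ... */

          /*
           * Terminate the buffer so consumers can stop at ULONG_MAX
           * instead of scanning all max_entries slots.
           */
          if (trace->nr_entries < trace->max_entries)
                  trace->entries[trace->nr_entries++] = ULONG_MAX;
  }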
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This flags the default unwinder as reliable, as it tends to be reliable
enough for the purposes of the stacktrace buffer. We leave the unreliable
cases for the unwind methods that we know to be completely broken.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adopts the reliability checks from the x86 stacktrace code so known
bad addresses are not recorded in the stack trace buffer.
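A sketch of the kind of filtering this brings over (callback name and
exact checks assumed from the x86 code):

  static void save_stack_address(void *data, unsigned long addr, int reliable)
  {
          struct stack_trace *trace = data;

          /* Drop addresses flagged as unreliable by the unwinder or
           * that don't point into kernel text. */
          if (!reliable || !__kernel_text_address(addr))
                  return;

          if (trace->nr_entries < trace->max_entries)
                  trace->entries[trace->nr_entries++] = addr;
  }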
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
save_stack_trace_tsk() and friends can be called from atomic context (as
triggered by latencytop), and subsequently hit two problematic allocation
points that were using GFP_KERNEL (these were dwarf_unwind_stack() and
dwarf_frame_alloc_regs()). Convert these over to GFP_ATOMIC and get
latencytop working with the DWARF unwinder.
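The change itself is just a flag swap at the affected allocation sites,
roughly (the allocation shown is illustrative, not the exact dwarf.c
call):

  /* before: may sleep, unsafe when reached from atomic context */
  reg = kzalloc(sizeof(*reg), GFP_KERNEL);

  /* after: safe from atomic context */
  reg = kzalloc(sizeof(*reg), GFP_ATOMIC);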
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Trying to figure out the best value for DWARF_ARCH_UNWIND_OFFSET is
tricky at best. Various things can change the size (and offset from the
beginning of the function) of the prologue. Notably, turning on ftrace
adds calls to mcount at the beginning of functions, thereby pushing the
prologue further into the function.
So replace DWARF_ARCH_UNWIND_OFFSET with some code that continues to
execute CFA instructions until the value of the return address register is
defined. This is safe to do because we know that the return address must
have been pushed onto the frame before our first function call; we just
can't figure out where at compile-time.
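Conceptually the prologue scan becomes a loop of the following shape
(the helper and register names here are placeholders, not the actual
unwinder API):

  /*
   * Keep executing CFA instructions from the start of the FDE until a
   * rule for the return address register exists, rather than stopping
   * at a fixed compile-time offset.
   */
  while (insn < end && !dwarf_frame_reg_defined(frame, RA_REG))
          insn = dwarf_cfa_execute(insn, end, cie, fde, frame);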
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The destination address might be unaligned, so set it with
put_unaligned() for safety. This restores the previous behaviour, albeit
through the proper API.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This was using internal symbols for unaligned accesses, bypassing the
exposed interface for variable sized safe accesses. This converts all of
the __get_unaligned_cpuXX() users over to get_unaligned() directly,
relying on the cast to select the proper internal routine.
Additionally, the __put_unaligned_cpuXX() case is superfluous given that
the destination address is aligned in all of the current cases, so just
drop that outright.
Furthermore, this switches to the asm/unaligned.h header instead of the
asm-generic version, which was silently bypassing the SH-4A optimized
unaligned ops.
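For reference, the generic accessors pick the access width from the
pointer type, so call sites reduce to something like this (pointer
names illustrative):

  #include <asm/unaligned.h>

  u16 opcode = get_unaligned((u16 *)pc);   /* safe 16-bit load  */
  u32 word   = get_unaligned((u32 *)addr); /* safe 32-bit load  */
  put_unaligned(word, (u32 *)dst);         /* safe 32-bit store */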
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Annotate various assembly code paths with CFI assembler directives so
that DWARF unwind info is available for the unwinder.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In order to use DWARF unwinder info the frame register has to contain a
valid value. Whilst GCC takes care of this for C code, we have to do it
ourselves for assembly.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is a first cut at a generic DWARF unwinder for the kernel. It's
still lacking DWARF64 support and the DWARF expression support hasn't
been tested very well but it is generating proper stacktraces on SH for
WARN_ON() and NULL dereferences.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Instead of implementing our own stack unwinder via dump_trace() we
should use the new stack unwinder API because it is more modular. This
change allows us to decouple the interface for generating stacktraces
from the implementation of a stack unwinder.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Provide an interface for registering stack unwinders, where each
unwinder is given a rating that describes its accuracy and
complexity. The more accurate an unwinder is, the more complex it is.
If the current stack unwinder faults, then the stack unwinder with the
next highest accuracy will be used in its place (provided one is
available). For example, this allows unwinders, such as the DWARF
unwinder, to liberally sprinkle BUG()s to catch badly formed DWARF debug
info.
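The interface boils down to a rated descriptor registered on a list; a
sketch of its shape (the field and function names are assumptions based
on the description above):

  struct unwinder {
          const char *name;
          struct list_head list;
          int rating;     /* higher = more accurate (and more complex) */
          void (*dump)(struct task_struct *task, struct pt_regs *regs,
                       unsigned long *sp, const struct stacktrace_ops *ops,
                       void *data);
  };

  int unwinder_register(struct unwinder *u);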
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Copy the stacktrace ops code from x86 and provide a central function for
use by functions that need to dump a callstack.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the AP325RXA board code to register devices at
arch_initcall() time instead of device_initcall(). This
unbreaks pcf8563 RTC driver support.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the Migo-R board code to register devices at
arch_initcall() time instead of __initcall(). This
unbreaks migor_ts touch screen driver support.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the processor platform device setup
functions from __initcall() and sometimes
device_initcall() to arch_initcall().
This makes sure that the platform devices are
registered a bit earlier so the devices are
available when drivers register using initcall
levels earlier than device_initcall().
A good example is platform devices needed by
i2c-sh_mobile.c which registers a bit earlier
using subsys_initcall().
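Initcall levels run in a fixed order (core, postcore, arch, subsys, fs,
device, late), so a representative conversion looks like the following
(the sh7722 names are illustrative):

  static int __init sh7722_devices_setup(void)
  {
          return platform_add_devices(sh7722_devices,
                                      ARRAY_SIZE(sh7722_devices));
  }
  arch_initcall(sh7722_devices_setup);  /* was __initcall()/device_initcall() */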
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
commit fd51d251e4
Author: Stefan Raspl <raspl@linux.vnet.ibm.com>
Date: Tue May 19 09:59:08 2009 +0200
blktrace: remove debugfs entries on bad path
added an explicit invocation of debugfs_remove for bt->dir, but in
blk_remove_buf_file_callback we are also getting the directory removed. On
occasion I am seeing memory corruption that I have bisected down to
this commit. [The testing involves a (long) series of I/O benchmarks
with blktrace invoked around the actual runs.] I believe that this
committed patch is correct, but the problem actually lies in the code
in blk_remove_buf_file_callback.
With this patch I am able to consistently get complete runs whereas
previously I could not get a single run to complete.
The first part of the patch simply moves the debugfs_remove below the
relay_close: the relay_close call will remove files under bt->dir, and
so we should not remove the directory until all the files we created
have been removed. (Note: This is not sufficient to fix the problem -
the file system code has ref counts on the directory, so our invocation
does not cause the directory to actually be removed. Nonetheless, we
should not rely upon that feature.)
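The resulting teardown order is, in sketch form (struct blk_trace field
names as used by blktrace):

  debugfs_remove(bt->msg_file);
  debugfs_remove(bt->dropped_file);
  relay_close(bt->rchan);   /* removes the per-CPU relay files under bt->dir */
  debugfs_remove(bt->dir);  /* only now is the directory itself removed */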
Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* 'for-linus' of git://oss.sgi.com/xfs/xfs:
xfs: fix spin_is_locked assert on uni-processor builds
xfs: check for dinode realtime flag corruption
use XFS_CORRUPTION_ERROR in xfs_btree_check_sblock
xfs: switch to NOFS allocation under i_lock in xfs_attr_rmtval_get
xfs: switch to NOFS allocation under i_lock in xfs_readlink_bmap
xfs: switch to NOFS allocation under i_lock in xfs_attr_rmtval_set
xfs: switch to NOFS allocation under i_lock in xfs_buf_associate_memory
xfs: switch to NOFS allocation under i_lock in xfs_dir_cilookup_result
xfs: switch to NOFS allocation under i_lock in xfs_da_buf_make
xfs: switch to NOFS allocation under i_lock in xfs_da_state_alloc
xfs: switch to NOFS allocation under i_lock in xfs_getbmap
xfs: avoid memory allocation under m_peraglock in growfs code
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
ahci: add workaround for on-board 5723s on some gigabyte boards
ahci: Soften up the dmesg on SB600 PMP softreset failure recovery
Documentation/kernel-parameters.txt: document libata's ignore_hpa option
sata_nv: MSI support, disabled by default
libata: OCZ Vertex can't do HPA
pata_atiixp: fix second channel support
pata_at91: fix resource release
We can't call nfs_readdata_release()/nfs_writedata_release() without
first initialising and referencing args.context. Doing so inside
nfs_direct_read_schedule_segment()/nfs_direct_write_schedule_segment()
causes an Oops.
We should rather be calling nfs_readdata_free()/nfs_writedata_free() in
those cases.
Looking at the O_DIRECT code, the "struct nfs_direct_req" is already
referencing the nfs_open_context for us. Since the readdata and writedata
structures carry a reference to that, we can simplify things by getting rid
of the extra nfs_open_context references, so that we can replace all
instances of nfs_readdata_release()/nfs_writedata_release().
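In sketch form, the distinction at the call sites is:

  /* error path, before args.context has been set up: */
  nfs_readdata_free(data);     /* just frees the structure */

  /* completion path: */
  nfs_readdata_release(data);  /* also puts the open context */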
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Some Gigabyte boards have on-board SIMG5723s connected to JMB AHCIs.
These are used to implement hardware RAID. Unfortunately, some
firmware revisions on these 5723s don't bring the link down when all
the downstream ports are unoccupied, yet don't respond to the reset
protocol either, which makes libata think that there's a device
attached to the port that isn't responding, and keep retrying. This
results in painfully long boot-time detection for these ports when
they're empty.
This patch quirks those boards such that ahci gives up after the
initial timeout. Combined with parallel probing, this gives quick
enough probing and also is safe because SIMG5723 will respond to the
first try if any of the downstream ports is occupied.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Marc Bowes <marcbowes@gmail.com>
Reported-by: Nicolas Mailhot <Nicolas.Mailhot@LaPoste.net>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Overly strong wording led to spurious bug reports: Novell bugzilla
#527748, RedHat bugzilla #468800. This patch softens up the dmesg on
SB600 PMP softreset failure recovery, so as to remove the scariness
and concern from the community.
Reported-by: pgnet Dev <pgnet.dev@gmail.com>
Signed-off-by: Shane Huang <shane.huang@amd.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
By default the kernel honors the HPA (host protected area) of hard
drives. Using libata's ignore_hpa module option it's possible to
change this behaviour.
Document usage and options of libata.ignore_hpa in
Documentation/kernel-parameters.txt.
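The documented usage is along these lines (on the kernel command line
for a built-in libata, or as a module parameter):

  libata.ignore_hpa=0   keep BIOS limits (default)
  libata.ignore_hpa=1   ignore limits, use the full disk capacity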
Signed-off-by: Michael Prokop <mika@grml.org>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
At least the nVidia MCP55 controller quite happily supports MSI.
This adds an option to use it. It is disabled by default.
As per feedback by Robert Hancock, it will honour the user
request as the kernel will not enable MSI where the controller
or the specific system configuration does not support it.
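The shape of such an option is roughly the following (the parameter
name shown is illustrative):

  static int msi_enable;  /* MSI off by default */
  module_param(msi_enable, int, 0444);
  MODULE_PARM_DESC(msi_enable, "Enable PCI MSI if supported (0=off [default], 1=on)");

  /* probe path: fall back to legacy INTx interrupts if this fails */
  if (msi_enable)
          pci_enable_msi(pdev);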
Signed-off-by: Tony Vroon <tony@linx.net>
Cc: Robert Hancock <hancockrwd@gmail.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
The OCZ Vertex SSD can't do HPA, and not in the usual way. It reports HPA,
allows unlocking but then fails all IOs which fall in the unlocked
area. Quirk it so that HPA unlocking is not used for the device.
Reported by Daniel Perup in bnc#522414.
https://bugzilla.novell.com/show_bug.cgi?id=522414
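The quirk amounts to a device blacklist entry carrying the broken-HPA
horkage, along the lines of (the model/revision strings are
illustrative):

  /* in ata_device_blacklist[]: */
  { "OCZ-VERTEX",         NULL,   ATA_HORKAGE_BROKEN_HPA },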
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Daniel Perup <probe@spray.se>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
PIO and MWDMA timings are never programmed for the second channel
because the timing registers are treated as 16-bit ones.
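Illustratively, each timing register packs both channels into one
32-bit word, so a 16-bit config-space access can never reach the
secondary channel's half (register name per the driver; the field
layout shown is an assumption):

  u32 timing;
  int shift = 16 * ap->port_no;   /* secondary channel in the upper half */

  pci_read_config_dword(pdev, ATIIXP_IDE_PIO_TIMING, &timing);
  timing &= ~(0xffff << shift);
  timing |= new_timing << shift;  /* new_timing: mode-specific value */
  pci_write_config_dword(pdev, ATIIXP_IDE_PIO_TIMING, timing);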
The bug is an atiixp -> pata_atiixp regression and goes back to:
commit 669a5db411
Author: Jeff Garzik <jeff@garzik.org>
Date: Tue Aug 29 18:12:40 2006 -0400
[libata] Add a bunch of PATA drivers.
Cc: Krystian Juskowiak <jusko@tlen.pl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bbpetkov@yahoo.de>
Cc: Robert Hancock <hancockrwd@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Julia Lawall discovered that pata_at91 wasn't freeing a memory region
allocated with kzalloc() on init failure paths. Upon review,
pata_at91 also seems to be doing unnecessary explicit resource
releases for managed resources. Convert the memory allocation to a
managed one and drop the unnecessary explicit resource releases.
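The managed conversion is essentially (variable name illustrative):

  /* before: leaked on failure paths, freed explicitly on remove */
  info = kzalloc(sizeof(*info), GFP_KERNEL);

  /* after: freed automatically when the device goes away */
  info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
  if (!info)
          return -ENOMEM;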
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Julia Lawall <julia@diku.dk>
Cc: Sergey Matyukevich <geomatsi@gmail.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Without SMP or preemption spin_is_locked always returns false,
so we can't do an assert with it. Instead use assert_spin_locked,
which does the right thing on all builds.
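I.e. the conversion is (the lock shown is a placeholder):

  /* before: spin_is_locked() is always 0 on !SMP, so the assert fires */
  ASSERT(spin_is_locked(&lock));

  /* after: correct on all configurations */
  assert_spin_locked(&lock);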
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
Reported-by: Johannes Engel <jcnengel@googlemail.com>
Tested-by: Johannes Engel <jcnengel@googlemail.com>
Signed-off-by: Felix Blyakher <felixb@sgi.com>