For pseries IOMMU bypass I want to be able to fall back to the regular
IOMMU ops. Do this by creating a dma_mapping_ops struct, and convert
the others while at it.
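For reference, a minimal sketch of the kind of ops table meant here; the
member names follow the usual DMA API shape and are illustrative, not
necessarily the exact arch/powerpc definition:

    struct dma_mapping_ops {
            void *(*alloc_coherent)(struct device *dev, size_t size,
                                    dma_addr_t *dma_handle, gfp_t flag);
            void (*free_coherent)(struct device *dev, size_t size,
                                  void *vaddr, dma_addr_t dma_handle);
            dma_addr_t (*map_single)(struct device *dev, void *ptr,
                                     size_t size,
                                     enum dma_data_direction direction);
            void (*unmap_single)(struct device *dev, dma_addr_t dma_addr,
                                 size_t size,
                                 enum dma_data_direction direction);
            int (*dma_supported)(struct device *dev, u64 mask);
    };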
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Allocate IOMMU tables local to the relevant node.
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
of_node_to_nid returns -1 if the associativity cannot be found. This
means pcibus_to_cpumask has to be careful not to pass a negative index into
node_to_cpumask.
Since pcibus_to_node could be used a lot, and of_node_to_nid is slow (it
walks a list doing strcmps), let's also cache the node in the
pci_controller struct.
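A minimal sketch of the guard, using the topology.h helpers named above
(illustrative, not the exact header text):

    #define pcibus_to_cpumask(bus)  (pcibus_to_node(bus) == -1 ?        \
                                     CPU_MASK_ALL :                     \
                                     node_to_cpumask(pcibus_to_node(bus)))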
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove some stale POWER3/POWER4/970 on 32bit kernel support.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Micro-optimisation - add no-minimal-toc to some more arch/powerpc Makefiles.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Forthcoming machines will extend the FPSCR to 64 bits. We already
had a 64-bit save area for the FPSCR, but we need to use a new form
of the mtfsf instruction. Fortunately this new form is decoded as
an ordinary mtfsf by existing 64-bit processors.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
zImage will set /chosen/bootargs (if it is otherwise empty) with the
contents of a buffer in the section "__builtin_cmdline". This permits
tools to edit zImage binaries to set the command-line eventually
processed by vmlinux.
Signed-off-by: Michal Ostrowski <mostrows@watson.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Instead of trying to make PPC64 MSI fit into an Intel-centric MSI layer, a
simple short-term solution is to hook the pci_{en/dis}able_msi() calls
and make a machdep call.
The rest of the MSI functions are superfluous for what is needed at this
time; many of them can have machdep calls added as needed.
Ben and Michael Ellerman are looking into rewriting the MSI layer to be
more generic. However, in the meantime this works as an interim
solution.
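A minimal sketch of the hook idea; the ppc_md member names below are
illustrative rather than necessarily the exact machdep_calls fields:

    int pci_enable_msi(struct pci_dev *pdev)
    {
            if (ppc_md.enable_msi)
                    return ppc_md.enable_msi(pdev);
            return -ENOSYS;
    }

    void pci_disable_msi(struct pci_dev *pdev)
    {
            if (ppc_md.disable_msi)
                    ppc_md.disable_msi(pdev);
    }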
Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds support to recognize the PCIe device_type "pciex" and makes
the portdrv buildable.
Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The push_end macro in arch/powerpc/kernel/pci_32.c uses integer
division and multiplication to achieve the effect of rounding a
resource end address up and then advancing it to the end of a
power-of-2 sized region. This changes it to an equivalent computation
that only needs an integer add and OR. This is partly based on an
earlier patch by Mel Gorman.
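To illustrate the equivalence (assuming the region size is a power of two;
this is not the exact macro text from pci_32.c):

    unsigned long round_old(unsigned long end, unsigned long size)
    {
            return ((end + size) / size) * size + size - 1; /* div + mul */
    }

    unsigned long round_new(unsigned long end, unsigned long size)
    {
            return (end + size) | (size - 1);               /* add + OR  */
    }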
Signed-off-by: Paul Mackerras <paulus@samba.org>
Some POWER5+ machines can do 64k hardware pages for normal memory but
not for cache-inhibited pages. This patch lets us use 64k hardware
pages for most user processes on such machines (assuming the kernel
has been configured with CONFIG_PPC_64K_PAGES=y). User processes
start out using 64k pages and get switched to 4k pages if they use any
non-cacheable mappings.
With this, we use 64k pages for the vmalloc region and 4k pages for
the imalloc region. If anything creates a non-cacheable mapping in
the vmalloc region, the vmalloc region will get switched to 4k pages.
I don't know of any driver other than the DRM that would do this,
though, and these machines don't have AGP.
When a region gets switched from 64k pages to 4k pages, we do not have
to clear out all the 64k HPTEs from the hash table immediately. We
use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page
was hashed in as a 64k page or a set of 4k pages. If hash_page is
trying to insert a 4k page for a Linux PTE and it sees that it has
already been inserted as a 64k page, it first invalidates the 64k HPTE
before inserting the 4k HPTE. The hash invalidation routines also use
the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a
set of 4k HPTEs to remove. With those two changes, we can tolerate a
mix of 4k and 64k HPTEs in the hash table, and they will all get
removed when the address space is torn down.
Signed-off-by: Paul Mackerras <paulus@samba.org>
The pgdir field in the paca was a leftover from the dynamic VSIDs
patch, and is not used in the current kernel code. This removes it.
Signed-off-by: Paul Mackerras <paulus@samba.org>
People have been reporting that PPP connections over ptys, such as
used with PPTP, will hang randomly when transferring large amounts of
data, for instance in http://bugzilla.kernel.org/show_bug.cgi?id=6530.
I have managed to reproduce the problem, and the patch below fixes the
actual cause.
The problem is not in fact in ppp_async.c but in n_tty.c. What
happens is that when pptp reads from the pty, we call read_chan() in
drivers/char/n_tty.c on the master side of the pty. That copies all
the characters out of its buffer to userspace and then calls
check_unthrottle(), which calls the pty unthrottle routine, which
calls tty_wakeup on the slave side, which calls ppp_asynctty_wakeup,
which calls tasklet_schedule. So far so good. Since we are in
process context, the tasklet runs immediately and calls
ppp_async_process(), which calls ppp_async_push, which calls the
tty->driver->write function to send some more output.
However, tty->driver->write() returns zero, because the master
tty->receive_room is still zero. We haven't returned from
check_unthrottle() yet, and read_chan() only updates tty->receive_room
_after_ calling check_unthrottle. That means that the driver->write
call in ppp_async_process() returns 0. That would be fine if we were
going to get a subsequent wakeup call, but we aren't (we just had it,
and the buffer is now empty).
The solution is for n_tty.c to update tty->receive_room _before_
calling the driver unthrottle routine. The patch below does this.
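A simplified sketch of the reordering in read_chan(); the receive_room
computation shown here is a placeholder for the real bookkeeping:

    if (n_tty_chars_in_buffer(tty) <= TTY_THRESHOLD_UNTHROTTLE) {
            /* update receive_room first... */
            tty->receive_room = N_TTY_BUF_SIZE - n_tty_chars_in_buffer(tty);
            /* ...so that driver->write() called from the unthrottle
             * path sees the space we just freed up */
            check_unthrottle(tty);
    }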
With this patch I was able to transfer a 900MB file over a PPTP
connection (taking about 25 minutes), whereas without the patch the
connection would always stall in under a minute.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
From: Christoph Lameter <clameter@sgi.com>
Looks like a comma was left over from the conversion from a struct to an
assignment.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
According to the Intel ICH spec, there are several rules requiring that Base
Address registers be programmed before IOSE (in the PCICMD register) is
enabled.
For example, on the ICH7:
12.1.3 SATA: the base address register for the bus master register
should be programmed before this bit is set.
11.1.3 PCICMD (USB): the base address register for USB should be
programmed before this bit is set.
....
To make sure kernel code follows this rule, and to prevent unnecessary
confusion, I propose this patch.
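An illustration of the required ordering using the standard config-space
accessors; the BAR offset and value below are placeholders:

    static void enable_io_after_bar(struct pci_dev *dev, u32 bar_value)
    {
            u16 cmd;

            /* program the base address register first... */
            pci_write_config_dword(dev, PCI_BASE_ADDRESS_4, bar_value);
            /* ...then set IOSE (I/O space enable) in PCICMD */
            pci_read_config_word(dev, PCI_COMMAND, &cmd);
            pci_write_config_word(dev, PCI_COMMAND, cmd | PCI_COMMAND_IO);
    }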
Signed-off-by: Luming Yu <luming.yu@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
At least one laptop blew up on resume from suspend with a black screen due
to the lack of this patch. By only writing back config space that is
different, we minimise the possibility of accidents like this.
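A sketch of the idea as it would look in pci_restore_state();
saved_config_space is the existing array in struct pci_dev where the
registers were stashed at suspend time:

    static void restore_changed_config(struct pci_dev *dev)
    {
            u32 val;
            int i;

            for (i = 0; i < 16; i++) {  /* 64 bytes of standard config space */
                    pci_read_config_dword(dev, i * 4, &val);
                    if (val != dev->saved_config_space[i])
                            pci_write_config_dword(dev, i * 4,
                                                   dev->saved_config_space[i]);
            }
    }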
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We currently don't handle errors properly when resuming a PCI device:
* In pci_default_resume() we capture the error code returned by
pci_enable_device() but don't pass it up to the caller.
Introduced by commit 95a629657d
* In pci_resume_device(), the errors possibly returned by the driver's
.resume method or by the generic pci_default_resume() function are
ignored.
This patch fixes both issues.
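A simplified sketch of the first fix, propagating pci_enable_device()'s
return value instead of dropping it:

    static int pci_default_resume(struct pci_dev *pci_dev)
    {
            int retval = 0;

            pci_restore_state(pci_dev);
            if (pci_dev->is_enabled)
                    retval = pci_enable_device(pci_dev); /* now returned */
            if (pci_dev->is_busmaster)
                    pci_set_master(pci_dev);
            return retval;
    }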
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fix build error when CONFIG_ACPI not defined
Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch sets the max_cache_size value required to tune up the
scheduler in SMP systems. Otherwise, the calculated
migration_cost is too high and task scheduling may lock up.
Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
This adds a vdso_base element to the mm_context_t for 32-bit compiles
(both for ARCH=powerpc and ARCH=ppc). This fixes the compile errors
that have been reported in arch/powerpc/kernel/signal_32.c.
Signed-off-by: Paul Mackerras <paulus@samba.org>
* master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6:
[SPARC64]: Avoid JBUS errors on some Niagara systems.
[FUSION]: Fix mptspi.c build with CONFIG_PM not set.
[TG3]: Handle Sun onboard tg3 chips more correctly.
[SPARC64]: Dump local cpu registers in sun4v_log_error()
From: Milton Miller <miltonm@bga.com>
The add_preferred_console call in rtas_console.c was not causing the
console to be selected. It turns out that the add_preferred_console was
being called after the hvc_console driver was registered. It only works
when it is called before the console driver is registered.
Reorder hvc_console.o after the hvc_console drivers to allow the selection
during console_initcall processing.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
From: Markus Lidel <Markus.Lidel@shadowconnect.com>
- Fixed locking of struct i2o_exec_wait in Executive-OSM
- Removed LCT Notify in i2o_exec_probe() which caused freeing memory and
accessing freed memory during first enumeration of I2O devices
- Added missing locking in i2o_exec_lct_notify()
- Removed put_device() of I2O controller in i2o_iop_remove() which caused
the controller structure to get freed too early
- Fixed size of mempool in i2o_iop_alloc()
- Fixed access to freed memory in i2o_msg_get()
See http://bugzilla.kernel.org/show_bug.cgi?id=6561
Signed-off-by: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
From: Andrew Morton <akpm@osdl.org>
Work around the oops reported in
http://bugzilla.kernel.org/show_bug.cgi?id=6478.
Thanks to Ralf Hildebrandt <ralf.hildebrandt@charite.de> for testing and
reporting.
Acked-by: Dave Jones <davej@codemonkey.org.uk>
Cc: "Brown, Len" <len.brown@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
From: David Howells <dhowells@redhat.com>
Apply some alterations to the memory barrier document that I worked out
with Paul McKenney of IBM, plus some of the alterations suggested by Alan
Stern.
The following changes were made:
(*) One of the examples given for what can happen with overlapping memory
barriers was wrong.
(*) The description of general memory barriers said that a general barrier is
a combination of a read barrier and a write barrier. This isn't entirely
true: it implies both, but is more than a combination of both.
(*) The first example in the "SMP Barrier Pairing" section was wrong: the
loads around the read barrier need to touch the memory locations in the
opposite order to the stores around the write barrier (a sketch
illustrating this appears after this list).
(*) Added a note to make explicit that the loads should be in reverse order to
the stores.
(*) Adjusted the diagrams in the "Examples Of Memory Barrier Sequences"
section to make them clearer. Added a couple of diagrams to make it more
clear as to how it could go wrong without the barrier.
(*) Added a section on memory speculation.
(*) Dropped any references to memory allocation routines doing memory
barriers. They may do sometimes, but it can't be relied on. This may be
worthy of further documentation later.
(*) Made the fact that a LOCK followed by an UNLOCK should not be considered a
full memory barrier more explicit and gave an example.
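To illustrate the barrier-pairing point above (this example is added here
for clarity and is not taken from the document itself):

    int data, flag;

    void writer(void)               /* e.g. runs on CPU 1 */
    {
            data = 42;              /* store the payload first        */
            smp_wmb();              /* order stores: data before flag */
            flag = 1;               /* then publish                   */
    }

    void reader(void)               /* e.g. runs on CPU 2 */
    {
            while (flag != 1)       /* load flag first...             */
                    cpu_relax();
            smp_rmb();              /* order loads: flag before data  */
            BUG_ON(data != 42);     /* ...then data must be visible   */
    }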
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Paul E. McKenney <paulmck@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
In commit 8eb6c6e3b9, Christoph Hellwig
made iommu_alloc_coherent able to do node-local allocations, but
unfortunately got the order of the arguments to alloc_pages_node
wrong. This fixes it.
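For reference, the node id comes first in this interface, so the corrected
call looks like the following (variable names illustrative):

    struct page *page = alloc_pages_node(node, flag, order);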
Signed-off-by: Paul Mackerras <paulus@samba.org>
Doing PCI config space accesses to non-present PCI slots
can result in fatal JBUS errors if the PCI config access
hypervisor call is performed on cpus other than the boot
cpu.
PCI config space accesses to present PCI slots work just
fine.
Recursively traverse the OBP device tree under the PCI
controller node and record all present device IDs into
a small hash table.
Avoid the hypervisor call for any PCI config space access
attempt for a device not recorded in the hash table.
Signed-off-by: David S. Miller <davem@davemloft.net>
Get rid of all the SUN_570X logic and instead:
1) Make sure MEMARB_ENABLE is set when we probe the SRAM
for config information. If that is off we will get
timeouts.
2) Always try to sync with the firmware; if there is no
firmware running, do not treat it as an error and instead
just report it the first time we notice this condition.
3) If there is no valid SRAM signature, assume the device
is onboard by setting TG3_FLAG_EEPROM_WRITE_PROT.
Update driver version and release date.
With help from Michael Chan and Fabio Massimo Di Nitto.
Signed-off-by: David S. Miller <davem@davemloft.net>
A few cleanups in hvc_rtas.c:
1. Remove unused RTASCONS_PUT_ATTEMPTS
2. Remove unused rtascons_put_delay.
3. Use i as a loop counter like everyone else on earth.
4. Remove pointless variables, eg. x = foo; if (x) return something_else;
5. Whitespace cleanups and formatting.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently the hvc_rtas driver is painfully slow to use. Our "benchmark" is
ls -R /etc, which spits out about 27866 characters. The theoretical maximum
speed would be about 2.2 seconds; the current code takes ~50 seconds.
The core of the problem is that sometimes when the tty layer asks us to push
characters the firmware isn't able to handle some or all of them, and so
returns an error. The current code sees this and just returns to the tty code
with the buffer half sent.
The khvcd thread will eventually wake up and try to push more characters, which
will usually work because by then the firmware's had time to make room. But
the khvcd thread only wakes up every 10 milliseconds, which isn't fast enough.
So change the khvcd thread logic so that if there's an incomplete write we
yield() and then immediately try writing again. Doing so makes POLL_QUICK and
POLL_WRITE synonymous, so remove POLL_QUICK.
With this patch our "benchmark" takes ~2.8 seconds.
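A heavily simplified sketch of the khvcd loop change; hvc_poll() and
HVC_POLL_WRITE are the existing driver interfaces, everything else here is
pared down for illustration:

    while (!kthread_should_stop()) {
            int poll_mask = hvc_poll(hp);

            if (poll_mask & HVC_POLL_WRITE) {
                    yield();    /* give the firmware a moment to drain */
                    continue;   /* then push the remaining bytes now   */
            }
            msleep_interruptible(10);  /* otherwise keep the old 10ms poll */
    }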
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This gives the ability to control whether alignment exceptions get
fixed up or reported to the process as a SIGBUS, using the existing
PR_SET_UNALIGN and PR_GET_UNALIGN prctls. We do not implement the
option of logging a message on alignment exceptions.
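A small userspace usage example (added for illustration, not part of the
patch itself):

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            /* report unaligned accesses as SIGBUS instead of fixing them up */
            if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS) != 0)
                    perror("PR_SET_UNALIGN");
            return 0;
    }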
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds the PowerPC part of the code to allow processes to change
their endian mode via prctl.
This also extends the alignment exception handler to be able to fix up
alignment exceptions that occur in little-endian mode, both for
"PowerPC" little-endian and true little-endian.
We always enter signal handlers in big-endian mode -- the support for
little-endian mode does not amount to the creation of a little-endian
user/kernel ABI. If the signal handler returns, the endian mode is
restored to what it was when the signal was delivered.
We have two new kernel CPU feature bits, one for PPC little-endian and
one for true little-endian. Most of the classic 32-bit processors
support PPC little-endian, and this is reflected in the CPU feature
table. There are two corresponding feature bits reported to userland
in the AT_HWCAP aux vector entry.
This is based on an earlier patch by Anton Blanchard.
Signed-off-by: Paul Mackerras <paulus@samba.org>
This new prctl is intended for changing the execution mode of the
processor, on processors that support both a little-endian mode and a
big-endian mode. It is intended for use by programs such as
instruction set emulators (for example an x86 emulator on PowerPC),
which may find it convenient to use the processor in an alternate
endianness mode when executing translated instructions.
Note that this does not imply the existence of a fully-fledged ABI for
both endiannesses, or of compatibility code for converting system
calls done in the non-native endianness mode. The program is expected
to arrange for all of its system call arguments to be presented in the
native endianness.
Switching between big and little-endian mode will require some care in
constructing the instruction sequence for the switch. Generally the
instructions up to the instruction that invokes the prctl system call
will have to be in the old endianness, and subsequent instructions
will have to be in the new endianness.
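A userspace sketch of the interface only; as explained above, real users
need a carefully constructed instruction sequence around the mode switch,
so a plain C caller like this is illustrative rather than practical:

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            /* PR_ENDIAN_LITTLE selects true little-endian mode;
             * PR_ENDIAN_PPC_LITTLE selects PowerPC pseudo little-endian */
            if (prctl(PR_SET_ENDIAN, PR_ENDIAN_LITTLE) != 0)
                    perror("PR_SET_ENDIAN");
            return 0;
    }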
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When debugging early kernel crashes that happen after console_init() and
before a proper console driver takes over, we often have to go hack into
udbg.c to prevent it from unregistering so we can "see" what is
happening. This patch instead adds a kernel command line option,
"udbg-immortal", to avoid having to modify the kernel.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
POWER6 moves some of the MMCRA bits and also requires some bits to be
cleared on each PMU interrupt.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Make sure dma_alloc_coherent allocates memory from the local node. This
is important on Cell where we avoid going through the slow cpu
interconnect.
Note: I could only test this patch on Cell, it should be verified on
some pseries machine by those that have the hardware.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
On 64bit powerpc we can find out what node a pci bus hangs off, so
implement the topology.h macros that export this information.
For 32bit this seems a little more difficult, but I don't know of 32bit
powerpc NUMA machines either, so let's leave it out for now.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch attempts to handle RTAS "busy" return codes in a more simple
and consistent manner. Typical callers of RTAS shouldn't have to
manage wait times and delay calls.
This patch also changes the kernel to use msleep() rather than udelay()
when a runtime delay is necessary. This will avoid CPU soft lockups
for extended delay conditions.
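A sketch of the retry pattern; the rtas_call() arguments shown are
illustrative, and RTAS_BUSY is the standard "busy, try again" status:

    static int rtas_call_until_done(int token)
    {
            int rc;

            do {
                    rc = rtas_call(token, 0, 1, NULL);
                    if (rc == RTAS_BUSY)
                            msleep(1);  /* sleep, don't udelay()-spin */
            } while (rc == RTAS_BUSY);

            return rc;
    }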
Signed-off-by: John Rose <johnrose@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
From: Andrew Morton <akpm@osdl.org>
arch/powerpc/Kconfig:339:warning: leading whitespace ignored
arch/powerpc/Kconfig:347:warning: leading whitespace ignored
arch/powerpc/Kconfig:357:warning: leading whitespace ignored
arch/powerpc/Kconfig:373:warning: leading whitespace ignored
arch/powerpc/Kconfig:382:warning: leading whitespace ignored
arch/powerpc/Kconfig:394:warning: leading whitespace ignored
arch/powerpc/Kconfig:842:warning: leading whitespace ignored
arch/powerpc/Kconfig:847:warning: leading whitespace ignored
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The 970MP cputable entry needs a num_pmcs entry for oprofile to work.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
My js20 appears to lack the ibm,#dma- properties, and boot fails with a
"Kernel panic - not syncing: iommu_init_table: Can't allocate 0 bytes"
message.
This adds a fallback to the "#address-cells" property in case the
"ibm,#dma-address-cells" property is missing. Tested on js20 and
power5 lpar.
Unless there is a more elegant solution... :-)
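A sketch of the fallback; get_property() was the device-tree accessor of
this era, and the local variable here is illustrative:

    const void *cells;

    cells = get_property(node, "ibm,#dma-address-cells", NULL);
    if (cells == NULL)
            cells = get_property(node, "#address-cells", NULL);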
Signed-off-by: Will Schmidt <willschm@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Our MMU hash management code would not set the "C" bit (changed bit) in
the hardware PTE when updating a RO PTE into a RW PTE. That would cause
the hardware to possibly do a write back to the hash table to set it on
the first store access, which, in addition to being a performance issue,
might also hit a bug when running with native hash management (non-HV),
as our code is specifically optimized for the case where no write back
happens.
Thus there is a very small theoretical window where a hash PTE can become
corrupted if that HPTE has just been upgraded to read-write, a store
access happens on it, and that races with another processor evicting
that same slot. Since eviction (caused by an almost full hash) is
extremely rare, the bug is fortunately very unlikely to happen.
This is fixed by allowing the update of the protection bits in the native
hash handling to also set (but not clear) the "C" bit, and, in order to
also improve performance in the general case, by always setting that
bit on newly inserted hash PTEs so that writeback really never happens.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>