Commit Graph

14 Commits

Michael Ellerman
4c55130b2a ppc64 iSeries: Update create_pte_mapping to replace iSeries_bolt_kernel()
early_setup() calls htab_initialize() which is similar, but not identical
to iSeries_bolt_kernel().

On iSeries the Hypervisor has already inserted some ptes for us, and we
simply have to detect that and bolt them. iSeries_hpte_bolt_or_insert()
implements that logic.

For the case of a non-existing pte we just call iSeries_hpte_insert(). This
appears to work, although it's not entirely equivalent to the old code in
iSeries_make_pte() which panicked if we got a secondary slot. Not sure if
that's important.

Finally we call iSeries_hpte_bolt_or_insert() from create_pte_mapping(),
which is called from htab_initialize() for each lmb region.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
2005-09-23 14:47:58 +10:00
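
A minimal sketch of the bolt-or-insert decision described in the commit above. The helpers below (hpte_find_slot, hpte_set_bolted, hpte_insert) are hypothetical stand-ins, not the real iSeries HPTE interfaces:

    /* Illustration only: these helpers are invented stand-ins for the
     * iSeries HPTE primitives, not the kernel's actual interfaces. */
    #include <stdio.h>

    static long hpte_find_slot(unsigned long va)  { return va == 0x1000 ? 3 : -1; }
    static void hpte_set_bolted(long slot)        { printf("bolting existing slot %ld\n", slot); }
    static long hpte_insert(unsigned long va)     { printf("inserting pte for va 0x%lx\n", va); return 0; }

    /* Bolt a pte the Hypervisor already inserted for us if one exists,
     * otherwise fall back to a plain insert. */
    static long hpte_bolt_or_insert(unsigned long va)
    {
            long slot = hpte_find_slot(va);

            if (slot >= 0) {                /* Hypervisor already mapped this va */
                    hpte_set_bolted(slot);
                    return slot;
            }
            return hpte_insert(va);         /* no existing pte */
    }

    int main(void)
    {
            hpte_bolt_or_insert(0x1000);    /* pre-existing mapping gets bolted */
            hpte_bolt_or_insert(0x2000);    /* missing mapping gets inserted */
            return 0;
    }
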
Olof Johansson
637a6ff6ce [PATCH] ppc64: Updated Olof misc updates 3/3
Replace some of the hard-coded constants with PAGE_SIZE/SHIFT/ORDER where
appropriate.

Likewise, in a couple of places it doesn't make sense to base some
allocations on page size when all that's required is a constant 4K,
etc.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-21 19:21:07 +10:00
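
The substitution described above, as a small self-contained sketch. PAGE_SHIFT of 12 is assumed here purely for illustration; the point is deriving sizes from a single definition instead of scattering 4096 literals:

    #include <stdio.h>

    /* Assumed values for illustration; the kernel derives these per arch. */
    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (1UL << PAGE_SHIFT)

    int main(void)
    {
            unsigned long len = 5000;

            /* before: (len + 4095) & ~4095UL, with the constants hard-coded */
            unsigned long rounded = (len + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
            unsigned long pages   = rounded >> PAGE_SHIFT;

            printf("%lu bytes -> %lu bytes (%lu pages)\n", len, rounded, pages);
            return 0;
    }
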
Kumar Gala
5f7c690728 [PATCH] powerpc: Merged ppc_asm.h
Merged ppc_asm.h between ppc32 & ppc64.  The majority of the file is
common between the two architectures excluding how a single GPR is
saved/restored and which GPRs are non-volatile.

Additionally, moved the ASM_CONST macro used on ppc64 into ppc_asm.h.

Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-19 09:38:49 +10:00
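
A sketch of the ASM_CONST idiom mentioned above, assuming its usual form: emitted bare when included from assembly, given a UL suffix when included from C (a 64-bit build is assumed for the example constant):

    #include <stdio.h>

    #ifdef __ASSEMBLY__
    #define ASM_CONST(x)    x               /* assembler cannot take a UL suffix */
    #else
    #define __ASM_CONST(x)  x##UL           /* C side wants unsigned long arithmetic */
    #define ASM_CONST(x)    __ASM_CONST(x)
    #endif

    /* Example user of the macro; the value is the ppc64 kernel base address. */
    #define KERNELBASE      ASM_CONST(0xc000000000000000)

    int main(void)
    {
            printf("KERNELBASE = 0x%lx\n", KERNELBASE);
            return 0;
    }
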
David Gibson
14b3466161 [PATCH] Invert sense of SLB class bit
Currently, we set the class bit in kernel SLB entries, and clear it on
user SLB entries.  On POWER5, ERAT entries created in real mode have
the class bit clear.  So to avoid flushing kernel ERAT entries on each
context switch, this patch inverts our usage of the class bit, setting
it on user SLB entries and clearing it on kernel SLB entries.

Booted on POWER5 and G5.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-06 16:57:46 +10:00
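
A small sketch of the inverted convention. The bit position below is assumed purely for illustration (see the real SLB_VSID_* definitions for the authoritative layout); what matters is that a class-selective invalidate at context switch now leaves kernel translations alone:

    #include <stdio.h>

    /* Assumed bit position, for illustration only. */
    #define SLB_VSID_CLASS  (1UL << 7)

    static unsigned long slb_vsid_flags(int is_user)
    {
            unsigned long flags = 0;

            /* Inverted usage from the patch above: user entries carry the
             * class bit, kernel entries do not, matching the class of ERAT
             * entries the CPU creates in real mode. */
            if (is_user)
                    flags |= SLB_VSID_CLASS;
            return flags;
    }

    int main(void)
    {
            printf("kernel SLB flags: 0x%lx\n", slb_vsid_flags(0));
            printf("user SLB flags:   0x%lx\n", slb_vsid_flags(1));
            return 0;
    }
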
David Gibson
c594adad56 [PATCH] Dynamic hugepage addresses for ppc64
Paulus, I think this is now a reasonable candidate for the post-2.6.13
queue.

Relax address restrictions for hugepages on ppc64

Presently, 64-bit applications on ppc64 may only use hugepages in the
address region from 1-1.5T.  Furthermore, if hugepages are enabled in
the kernel config, they may only use hugepages and never normal pages
in this area.  This patch relaxes this restriction, allowing any
address to be used with hugepages, but with a 1TB granularity.  That
is if you map a hugepage anywhere in the region 1TB-2TB, that entire
area will be reserved exclusively for hugepages for the remainder of
the process's lifetime.  This works analogously to hugepages in 32-bit
applications, where hugepages can be mapped anywhere, but with 256MB
(mmu segment) granularity.

This patch applies on top of the four level pagetable patch
(http://patchwork.ozlabs.org/linuxppc64/patch?id=1936).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:38 +10:00
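
The 1TB-granularity bookkeeping described above, reduced to plain arithmetic. The area-shift value and the bitmask variable are illustrative, not the kernel's actual per-process fields:

    #include <stdio.h>

    #define HTLB_AREA_SHIFT 40                      /* 1TB areas, assumed for illustration */

    static unsigned int htlb_areas;                 /* one bit per 1TB area opened for hugepages */

    static void open_hugepage_area(unsigned long addr)
    {
            /* Once a hugepage lands in a 1TB area, the whole area stays
             * reserved for hugepages for the process's lifetime. */
            htlb_areas |= 1U << (addr >> HTLB_AREA_SHIFT);
    }

    static int ok_for_normal_pages(unsigned long addr)
    {
            return !(htlb_areas & (1U << (addr >> HTLB_AREA_SHIFT)));
    }

    int main(void)
    {
            open_hugepage_area(0x10080000000UL);    /* a hugepage mapped in the 1TB-2TB area */

            printf("2.5TB usable for normal pages: %d\n", ok_for_normal_pages(0x28000000000UL));
            printf("1.5TB usable for normal pages: %d\n", ok_for_normal_pages(0x18000000000UL));
            return 0;
    }
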
David Gibson
c59c464a3e [PATCH] Change address of ppc64 initial segment table
On ppc64 machines with segment tables, CPU0's segment table is at a
fixed address, currently 0x9000.  This patch moves it to the free
space at 0x6000, just below the fwnmi data area.  This saves 8k of
space in vmlinux and the runtime kernel image.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:33 +10:00
David Gibson
e28f7faf05 [PATCH] Four level pagetables for ppc64
Implement 4-level pagetables for ppc64

This patch implements full four-level page tables for ppc64, thereby
extending the usable user address range to 44 bits (16T).

The patch uses a full page for the tables at the bottom and top level,
and a quarter page for the intermediate levels.  It uses full 64-bit
pointers at every level, thus also increasing the addressable range of
physical memory.  This patch also tweaks the VSID allocation to allow
a matching range for user addresses (this halves the number of available
contexts) and adds some #if and BUILD_BUG sanity checks.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:31 +10:00
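
The 44-bit figure quoted above follows directly from the level sizes described: a full 4K page of 64-bit entries gives 9 bits of index, a quarter page gives 7, plus 12 bits of page offset. A quick arithmetic check, with the per-level widths written out as assumptions:

    #include <stdio.h>

    int main(void)
    {
            /* Assumed 4K pages and 8-byte entries, as described above:
             * full page at bottom and top levels, quarter page in between. */
            int offset_bits = 12;           /* 4K page offset            */
            int pte_bits    = 9;            /* 4096 / 8 = 512 entries    */
            int pmd_bits    = 7;            /* 1024 / 8 = 128 entries    */
            int pud_bits    = 7;            /* 1024 / 8 = 128 entries    */
            int pgd_bits    = 9;            /* 4096 / 8 = 512 entries    */

            int total = offset_bits + pte_bits + pmd_bits + pud_bits + pgd_bits;

            printf("usable user range: %d bits = %luT\n", total, (1UL << total) >> 40);
            return 0;
    }
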
David Gibson
488f84994c [PATCH] ppc64: remove another fixed address constraint
Presently the LparMap, one of the structures the kernel shares with the
legacy iSeries hypervisor has a fixed offset address in head.S.  This patch
changes this so the LparMap is a normally initialized structure, without
fixed address.  This allows us to use macros to compute some of the values
in the structure, which wasn't previously possible because the assembler
always uses signed-% which gets the wrong answers for the computations in
question.

Unfortunately, a gcc bug means that doing this requires another structure
(hvReleaseData) to be initialized in asm instead of C, but on the whole the
result is cleaner than before.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-27 16:25:58 -07:00
David Gibson
533f08172e [PATCH] ppc64: dynamically allocate segment tables
PPC64 machines before Power4 need a segment table page allocated for each
CPU.  Currently these are allocated statically in a big array in head.S for
all CPUs.  The segment tables need to be in the first segment (so
do_stab_bolted doesn't take a recursive fault on the stab itself), but
other than that there are no constraints which require the stabs for the
secondary CPUs to be statically allocated.

This patch allocates segment tables dynamically during boot, using
lmb_alloc() to ensure they are within the first 256M segment.  This reduces
the kernel image size by 192k...

Tested on RS64 iSeries, POWER3 pSeries, and POWER5.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-27 16:25:58 -07:00
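
A hedged sketch of the placement constraint described above. lmb_alloc_limit() below is an invented stand-in with a made-up placement policy, not the kernel's lmb interface; the point is keeping each per-CPU stab below the 256MB segment boundary:

    #include <stdio.h>

    #define SID_SHIFT       28              /* 256MB segments */
    #define STAB_SIZE       4096UL          /* one page per CPU's segment table */

    /* Invented stand-in for a boot-time allocator that honours an upper
     * address limit; the placement below is fake and purely illustrative. */
    static unsigned long lmb_alloc_limit(unsigned long size, unsigned long align,
                                         unsigned long max_addr)
    {
            static unsigned long next = 0x3000000;
            unsigned long addr = (next + align - 1) & ~(align - 1);

            if (addr + size > max_addr)
                    return 0;
            next = addr + size;
            return addr;
    }

    int main(void)
    {
            int cpu;

            for (cpu = 0; cpu < 4; cpu++) {
                    /* Keep every stab in the first 256MB segment so that
                     * do_stab_bolted never faults on the stab itself. */
                    unsigned long stab = lmb_alloc_limit(STAB_SIZE, STAB_SIZE,
                                                         1UL << SID_SHIFT);
                    printf("cpu %d: stab at 0x%lx\n", cpu, stab);
            }
            return 0;
    }
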
David Gibson
96e2844999 [PATCH] ppc64: kill bitfields in ppc64 hash code
This patch removes the use of bitfield types from the ppc64 hash table
manipulation code.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-13 11:25:25 -07:00
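
The general shape of that conversion, sketched with invented field names and positions (not the real HPTE layout): a C bitfield, whose packing is compiler and endian dependent, becomes a plain 64-bit word with explicit masks and shifts:

    #include <stdio.h>

    typedef unsigned long long u64;

    /* Before (sketch): struct hpte { u64 avpn:57, lock:1, ..., v:1; };
     * After (sketch): one explicit word; names and positions below are
     * illustrative only. */
    #define HPTE_V_VALID            (1ULL << 0)
    #define HPTE_V_BOLTED           (1ULL << 4)
    #define HPTE_V_AVPN_SHIFT       7

    static int hpte_valid(u64 v) { return (v & HPTE_V_VALID) != 0; }
    static u64 hpte_avpn(u64 v)  { return v >> HPTE_V_AVPN_SHIFT; }

    int main(void)
    {
            u64 v = (0x123456ULL << HPTE_V_AVPN_SHIFT) | HPTE_V_BOLTED | HPTE_V_VALID;

            printf("valid=%d bolted=%d avpn=0x%llx\n",
                   hpte_valid(v), (int)((v & HPTE_V_BOLTED) != 0), hpte_avpn(v));
            return 0;
    }
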
R Sharada
f4c82d5132 [PATCH] ppc64 kexec: native hash clear
Add code to clear the hash table and invalidate the tlb for native (SMP,
non-LPAR) mode.  Supports 16M and 4k pages.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: R Sharada <sharada@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:51 -07:00
Arnd Bergmann
fef1c772fa [PATCH] ppc64: add BPA platform type
This adds the basic support for running on BPA machines.
So far, this is only the IBM workstation, and it will
not run on others without a little more generalization.

It should be possible to configure a kernel for any
combination of CONFIG_PPC_BPA with any of the other
multiplatform targets.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-23 09:43:37 +10:00
David Gibson
1f8d419e29 [PATCH] ppc64: pgtable.h and other header cleanups
This patch started as simply removing a few never-used macros from
asm-ppc64/pgtable.h, then kind of grew.  It now makes a bunch of
cleanups to the ppc64 low-level header files (with corresponding
changes to .c files where necessary) such as:
	- Abolishing never-used macros
	- Eliminating multiple #defines with the same purpose
	- Removing pointless macros (cases where just expanding the
macro everywhere turns out clearer and more sensible)
	- Redefining macros in terms of each other in cases where they
could have been, but weren't
	- Moving imalloc() related definitions from pgtable.h to their
own header file (imalloc.h)
	- Re-arranging headers to group things more logically
	- Moving all VSID allocation related things to mmu.h, instead
of being split between mmu.h and mmu_context.h
	- Removing some reserved space for flags from the PMD - we're
not using it.
	- Fixing some bugs which broke compilation with STRICT_MM_TYPECHECKS.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-05 16:36:32 -07:00
Linus Torvalds
1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00