Commit Graph

18295 Commits

Author SHA1 Message Date
Jeremy Fitzhardinge
925596a017 x86: avoid name conflict for Voyager leave_mm
Avoid a conflict between Voyager's leave_mm and asm-x86/mmu.h's leave_mm.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:55 +01:00
Paolo Ciarrocchi
7375931a27 x86: coding style fixes in arch/x86/ia32/audit.c
Fix one error reported by checkpatch; it now reports:

total: 0 errors, 0 warnings, 42 lines checked

Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:54 +01:00
Mel Gorman
bac4894dfa x86: make NUMA work on 32-bit again
On 32-bit NUMA, the memmap representing struct pages on each node is
allocated from node-local memory if possible. As only node-0 has memory from
ZONE_NORMAL, the memmap must be mapped into low memory. This is done by
reserving space in the Kernel Virtual Area (KVA) for the memmap belonging
to other nodes by taking pages from the end of ZONE_NORMAL and remapping
the other nodes' memmap into those virtual addresses. The node boundaries
are then adjusted so that the region of pages is not used and it is marked
as reserved in the bootmem allocator.

This reserved portion of the KVA is PMD-aligned, although strictly
speaking that requirement could be lifted (see the thread at
http://lkml.org/lkml/2007/8/24/220). The problem is that, when aligned, there
may be a portion of ZONE_NORMAL at the end that is not used for memmap and
does not have an initialised memmap nor is it marked reserved in the bootmem
allocator. Later in the boot process, these pages are freed and a storm of
Bad page state messages result.

This patch marks the pages wasted due to alignment as reserved in the
bootmem allocator so they are not accidentally freed. It is worth
noting that memory from node-0 is wasted where it could have been put into
ZONE_HIGHMEM on NUMA machines. Worse, the KVA is always reserved from the
location of real memory even when there is plenty of spare virtual address
space.

This patch also makes sure that reserve_bootmem() is not called with a
0-length size in numa_kva_reserve().  When this happens, it usually means
that a kernel built for Summit is being booted on a normal machine. The
resulting BUG_ON() is misleading so it is caught here.
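
As a minimal sketch of that zero-length guard (variable names follow the
32-bit NUMA code of the time and are illustrative, not the literal hunk):

    /* Sketch: skip the reservation entirely when no KVA pages were set
     * aside, instead of tripping the misleading BUG_ON() deep inside
     * reserve_bootmem(). */
    void __init numa_kva_reserve(void)
    {
            if (kva_pages)
                    reserve_bootmem(PFN_PHYS(kva_start_pfn),
                                    PFN_PHYS(kva_pages));
    }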

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30 13:32:54 +01:00
Markus Metzger
87e8407f9a x86, ptrace: add bts_struct size to status command
Return the size of bts_struct in the PTRACE_BTS_STATUS command.
Change types to u32.

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30 13:32:54 +01:00
Pavel Machek
4fc2fba804 x86: unify arch/x86/kernel/acpi/sleep*.c
Unify arch/x86/kernel/acpi/sleep*.c

Pretty trivial unification; when two functions differed, it was
usually in error handling, and the better of the two was picked up.

Signed-off-by: Pavel Machek <pavel@suse.cz>
Looks-okay-to: Rafael J. Wysocki <rjw@sisk.pl>
Tested-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30 13:32:54 +01:00
Ingo Molnar
8a19da7b7f x86: clean up arch/x86/mm/fault_64.c
clean up arch/x86/mm/fault_64.c a bit.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:53 +01:00
travis@sgi.com
acdac87202 percpu: make the asm-generic/percpu.h more "generic"
- add support for PER_CPU_ATTRIBUTES

- fix generic smp percpu_modcopy to use per_cpu_offset() macro.

Add the ability to use the generic percpu code even if the arch needs
to override several aspects of its operations. This will enable the use
of the generic percpu.h for all arches.

An arch may define:

__per_cpu_offset	Do not use the generic pointer array. Arch must
			define per_cpu_offset(cpu) (used by x86_64, s390).

__my_cpu_offset		Can be defined to provide an optimized way to determine
			the offset for variables of the currently executing
			processor. Used by ia64, x86_64, x86_32, sparc64, s/390.

SHIFT_PTR(ptr, offset)	If an arch defines it then special handling
			of pointer arithmetic may be implemented. Used
			by s/390.

(Some of these special percpu arch implementations may later be
consolidated so that there are fewer cases to deal with.)
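
As a hedged illustration of the override pattern described above (macro
spellings follow this message, e.g. SHIFT_PTR, and may differ from the
header as merged):

    /* Generic fallbacks; an arch that pre-defines any of these macros
     * keeps its own version instead. */
    #ifndef __per_cpu_offset
    extern unsigned long __per_cpu_offset[NR_CPUS];
    #define per_cpu_offset(x) (__per_cpu_offset[x])
    #endif

    #ifndef __my_cpu_offset
    #define __my_cpu_offset per_cpu_offset(raw_smp_processor_id())
    #endif

    #ifndef SHIFT_PTR
    #define SHIFT_PTR(ptr, offset) RELOC_HIDE((ptr), (offset))
    #endif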

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30 13:32:52 +01:00
travis@sgi.com
b32ef636a5 percpu: use a kconfig variable to signal arch specific percpu setup
The use of __GENERIC_PERCPU is a bit problematic since arches
may want to run their own percpu setup while using the generic
percpu definitions. Replace it with a Kconfig variable.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30 13:32:51 +01:00
H. Peter Anvin
cf8fa920cb i386: handle an initrd in highmem (version 2)
The boot protocol has until now required that the initrd be located in
lowmem, which makes the lowmem/highmem boundary visible to the boot
loader.  This was exported to the bootloader via a compile-time
field.  Unfortunately, the vmalloc= command-line option breaks this
part of the protocol; instead of adding yet another hack that affects
the bootloader, have the kernel relocate the initrd down below the
lowmem boundary inside the kernel itself.

Note that this does not rely on HIGHMEM being enabled in the kernel.
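
A rough sketch of the relocation idea (helper names are hypothetical, and
the real code must also map the highmem source pages through an early
mapping window before copying):

    /* Sketch only: move the initrd below the lowmem boundary if the
     * boot loader placed it above.  find_lowmem_hole() and
     * copy_from_highmem() are hypothetical helpers. */
    static void __init relocate_initrd(void)
    {
            unsigned long end_of_lowmem = max_low_pfn << PAGE_SHIFT;
            unsigned long image = boot_params.hdr.ramdisk_image;
            unsigned long size  = boot_params.hdr.ramdisk_size;
            unsigned long target;

            if (image + size <= end_of_lowmem)
                    return;                         /* already usable as-is */

            target = find_lowmem_hole(size);        /* hypothetical */
            copy_from_highmem(target, image, size); /* hypothetical */
            boot_params.hdr.ramdisk_image = target;
    }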

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30 13:32:51 +01:00
Huang, Ying
6d7d743375 x86 boot : export boot_params via debugfs for debugging
This patch exports the boot parameters via debugfs for debugging.

The files added are as follows:

boot_params/data    :  binary file for struct boot_params
boot_params/version :  boot protocol version

This patch is based on 2.6.24-rc5-mm1 and has been tested on the i386 and
x86_64 platforms.

This patch is based on Peter Anvin's proposal.
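
A hedged sketch of the debugfs wiring implied by the file list above
(error handling and the exact helper choices may differ from the real
patch):

    static struct debugfs_blob_wrapper boot_params_blob = {
            .data = &boot_params,
            .size = sizeof(boot_params),
    };

    static int __init boot_params_kdebugfs_init(void)
    {
            struct dentry *dir = debugfs_create_dir("boot_params", NULL);

            if (!dir)
                    return -ENOMEM;

            /* boot protocol version and the raw struct boot_params blob */
            debugfs_create_x16("version", S_IRUGO, dir,
                               &boot_params.hdr.version);
            debugfs_create_blob("data", S_IRUGO, dir, &boot_params_blob);
            return 0;
    }
    arch_initcall(boot_params_kdebugfs_init);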

Signed-off-by: Huang Ying <ying.huang@intel.com>

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30 13:32:51 +01:00
Miguel Boton
4d022e35fd x86: reboot_{32|64}.c unification
reboot_{32|64}.c unification patch.

This patch unifies the code from the reboot_32.c and reboot_64.c files.

It has been tested on computers with X86_32 and X86_64 kernels and it
looks like all reboot modes work fine (the EFI restart method hasn't been
tested yet).

I probably made some mistakes (like I usually do), so I hope
we can identify and fix them soon.

Signed-off-by: Miguel Boton <mboton@gmail.com>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:51 +01:00
Abhishek Sagar
f315decbd0 x86: kprobes change kprobe_handler flow
Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
Signed-off-by: Quentin Barnes <qbarnes@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:50 +01:00
Eric Dumazet
4c4915627f x86: make arch/x86/kernel/acpi/wakeup_32.S use a separate
While examining the vmlinux namelist on i386 (nm -v vmlinux) I noticed:

c01021d0 t es7000_rename_gsi
c010221a T es7000_start_cpu
<Big Hole>
c0103000 T thread_saved_pc

and

c0113218 T acpi_restore_state_mem
c0113219 T acpi_save_state_mem
<Big Hole>
c0114000 t wakeup_code

This is because arch/x86/kernel/acpi/wakeup_32.S forces a .text alignment
of 4096 bytes. (I have no idea if it is really needed, since
arch/x86/kernel/acpi/wakeup_64.S uses a 16-byte alignment *only*)

So arch/x86/kernel/built-in.o also has this alignment

arch/x86/kernel/built-in.o:     file format elf32-i386

Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         00018c94  00000000  00000000  00001000  2**12
                  CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE

But as arch/x86/kernel/acpi/wakeup_32.o is not the first object linked
into arch/x86/kernel/built-in.o, the linker had to insert several holes to
meet alignment requirements, because of .o nestings in the kbuild process.

This can be solved by using a special section, .text.page_aligned, so that
no holes are needed.

# size vmlinux.before vmlinux.after
   text    data     bss     dec     hex filename
4619942  422838  458752 5501532  53f25c vmlinux.before
4610534  422838  458752 5492124  53cd9c vmlinux.after

This saves 9408 bytes

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:50 +01:00
Andi Kleen
e3cfac84cf x86: mark memory_setup __init
Otherwise

WARNING: vmlinux.o(.text+0x64a9): Section mismatch: reference to .init.text:machine_specific_memory_setup (between 'memory_setup' and 'show_cpuinfo')
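
Roughly, the annotation looks like this (a sketch, with the body
simplified):

    /* An __init function may legitimately reference .init.text, so
     * marking the caller __init silences the section-mismatch warning. */
    char * __init memory_setup(void)
    {
            return machine_specific_memory_setup();
    }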

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:49 +01:00
Andi Kleen
a9a832902f x86: Set CFQ as default in 32-bit defconfig
Someone complained that the 32-bit defconfig contains AS as default IO
scheduler. Change that to CFQ.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:49 +01:00
Andi Kleen
a6b68076fd x86: compile apm and voyager module only when selected in Kconfig
Previously the complete files were #ifdef'ed, but now handle that in the
Makefile.

May save a minor bit of compilation time.

[ Stephen Rothwell <sfr@canb.auug.org.au>: build dependency fix ]
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:49 +01:00
Andi Kleen
37f30e21d6 x86: document fdimage/isoimage completely in make help
Add missing targets and missing options in x86 make help

[ mingo@elte.hu: more whitespace cleanups ]

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:49 +01:00
Andi Kleen
3898534d85 x86: remove CPU capabilities printks on 32-bit
I don't know of any case where they have been useful and they look ugly.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:49 +01:00
Jeremy Fitzhardinge
b899c5ed2e x86/efi: fix improper use of lvalue

pgd_val is no longer valid as an lvalue, so don't try to assign to it.
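
The shape of the change, as a sketch (newval is illustrative):

    /* before (no longer valid):  pgd_val(*pgd) = newval;        */
    /* after: construct and store the whole entry instead         */
    set_pgd(pgd, __pgd(newval));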

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:44 +01:00
Andreas Herrmann
9566e91d49 x86: fix detection of CONSTANT_TSC bit for AMD CPUs
Commits
 - c52f61fcbdb2aa84f0e4d831ef07f375e6b99b2c
  (x86: allow TSC clock source on AMD Fam10h and some cleanup)
 - e30436f05d456efaff77611e4494f607b14c2782
  (x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection)

are supposed to fix the detection of constant TSC for AMD CPUs.
Unfortunately, on x86_64 it still does not work with current x86/mm.
For a Phenom I still get:

  ...
  TSC calibrated against PM_TIMER
  Marking TSC unstable due to TSCs unsynchronized
  time.c: Detected 2288.366 MHz processor.
  ...

We have to set c->x86_power in early_identify_cpu to properly detect
the CONSTANT_TSC bit in early_init_amd.

The attached patch fixes this issue. Following are the relevant boot
messages when the fix is used:

  ...
  TSC calibrated against PM_TIMER
  time.c: Detected 2288.279 MHz processor.
  ...
  Initializing CPU#1
  ...
  checking TSC synchronization [CPU#0 -> CPU#1]: passed.
  ...
  Initializing CPU#2
  ...
  checking TSC synchronization [CPU#0 -> CPU#2]: passed.
  ...
  Booting processor 3/4 APIC 0x3
  ...
  checking TSC synchronization [CPU#0 -> CPU#3]: passed.
  Brought up 4 CPUs
  ...

Patch is against x86/mm (v2.6.24-rc8-672-ga9f7faa).
Please apply.

Set c->x86_power in early_identify_cpu. This ensures that
X86_FEATURE_CONSTANT_TSC can properly be set in early_init_amd.
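
Roughly, the fix amounts to reading the extended power-management CPUID
leaf in early_identify_cpu() (a sketch, not the literal hunk):

    /* Populate x86_power early so that early_init_amd() can test the
     * invariant-TSC bit (bit 8 of CPUID 0x80000007 EDX). */
    if (c->extended_cpuid_level >= 0x80000007)
            c->x86_power = cpuid_edx(0x80000007);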

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:41 +01:00
Andi Kleen
32c7553f82 x86: remove explicit C3 TSC check on 64bit
Trust the ACPI code to disable TSC instead when C3 is used.

AMD Fam10h does not disable the TSC in any C-state, so the
check was incorrect there anyway after the change
to handle AMD like Intel.

This allows using the TSC when C3 is disabled in software
(acpi.max_c_state=2) but the BIOS supports it anyway.

Match i386 behaviour.

Cc: lenb@kernel.org

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:41 +01:00
Andi Kleen
51fc97b935 x86: allow TSC clock source on AMD Fam10h and some cleanup
After a lot of discussion with AMD it turns out that the TSC
on Fam10h CPUs is synchronized when the CONSTANT_TSC cpuid bit is set.
Or rather, if there are ever systems where that is not true,
it would be the BIOS' task to disable the bit.

So finally use TSC gettimeofday on Fam10h by default.

Or rather, it is now always used on CPUs where the AMD-specific
CONSTANT_TSC bit is set.

This gives a nice speed boost for gettimeofday() on these systems,
which tends to be by far the most common v/syscall.

On a Fam10h system here TSC gtod uses about 20% of the CPU time of
acpi_pm based gtod(). This was measured on 32bit, on 64bit
it is even better because TSC gtod() can use a vsyscall
and stay in ring 3, which acpi_pm doesn't.

The Intel check simply checks for CONSTANT_TSC too without hardcoding
Intel vendor. This is equivalent on 64bit because all 64bit capable Intel
CPUs will have CONSTANT_TSC set.

On Intel there is no CPU supplied CONSTANT_TSC bit currently,
but we synthesize one based on hardcoded knowledge which steppings
have p-state invariant TSC.

So the new logic is: on CPUs which have the AMD-specific
CONSTANT_TSC bit set, or on Intel CPUs which are new enough
to be known to have a p-state-invariant TSC, always use
TSC-based gettimeofday().
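
A hedged sketch of that decision (not the literal patch):

    int unsynchronized_tsc(void)
    {
            /* Trust a p-state-invariant TSC, whether the bit came from
             * the CPU (AMD) or was synthesized (Intel). */
            if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
                    return 0;

            return num_present_cpus() > 1;  /* otherwise only safe on UP */
    }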

Cc: lenb@kernel.org

Signed-off-by: Andi Kleen <ak@suse.de>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:40 +01:00
Andi Kleen
2b16a23538 x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection
The next patch needs this in time_init(), which happens early.

This includes a minor fix on i386 so that early_intel_workarounds()
[which is now called early_init_intel] really executes early, as
the comments say.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:40 +01:00
Ingo Molnar
92767af0e3 x86: fix sched_clock()
[ andi@firstfloor.org: build fix ]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:40 +01:00
Andi Kleen
6d63de8dbc x86: remove get_cycles_sync
rdtsc is now speculation-safe, so no need for the sync variants of
the APIs.

[ mingo@elte.hu: removed the nsec_barrier() complication. ]

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:39 +01:00
Ingo Molnar
f06e4ec1c1 x86: read_tsc sync
make native_read_tsc() always non-speculative.
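
In effect the read becomes something like the sketch below; the real
kernel patches the barrier in via alternatives so only CPUs that need
it pay for it:

    static __always_inline unsigned long long native_read_tsc(void)
    {
            unsigned int lo, hi;

            asm volatile("lfence" ::: "memory");    /* stop speculation */
            asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
            return ((unsigned long long)hi << 32) | lo;
    }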

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:39 +01:00
Ingo Molnar
e402644013 x86: map vsyscalls early enough
map vsyscalls early enough. This is important if a __vsyscall_fn
function is used by other kernel code too.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:39 +01:00
Ingo Molnar
cdc7957d19 x86: move native_read_tsc() offline
move native_read_tsc() offline.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:39 +01:00
Ingo Molnar
6d5f718a49 x86: lfence fix
LFENCE is available on XMM2 or higher Intel CPUs - not XMM or higher...

this caused boot failures on XMM1 & !XMM1 capable CPUs.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:38 +01:00
Andi Kleen
707fa8ed92 x86: Implement support to synchronize RDTSC with LFENCE on Intel CPUs
According to Intel, RDTSC can always be synchronized with LFENCE
on all current CPUs. Implement the necessary CPUID bit for that.

It is not yet clear whether that is true for all future CPUs too,
but if there is ever another way the kernel can always be updated.
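
A hedged sketch of how such a synthetic bit could be set during CPU
identification (the helper and bit names are of-the-era guesses, not
confirmed by this message):

    if (c->x86_vendor == X86_VENDOR_INTEL)
            set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);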

Cc: asit.k.mallick@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:37 +01:00
Andi Kleen
de4218634e x86: implement support to synchronize RDTSC through MFENCE on AMD CPUs
According to AMD, RDTSC can be synchronized through MFENCE.
Implement the necessary CPUID bit for that.

Cc: andreas.herrmann3@amd.com
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:37 +01:00
Carlos R. Mafra
3aa88cdf6b x86: clean up k8topology.c
This patch fixes all errors pointed out by checkpatch.pl.

                                      errors   lines of code   errors/KLOC
arch/x86/mm/k8topology_64.c (before)      72             185         389.1
arch/x86/mm/k8topology_64.c (after)        0             185             0

No code changed.

   text    data     bss     dec     hex filename
   1506       0       0    1506     5e2 k8topology_64.o.after
   1506       0       0    1506     5e2 k8topology_64.o.before

md5sum:

   f9f48331a7eca4fc60d2a03369dc5f53  k8topology_64.o.after
   f9f48331a7eca4fc60d2a03369dc5f53  k8topology_64.o.before

Signed-off-by: Carlos R. Mafra <crmafra@gmail.com>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:36 +01:00
Hiroshi Shimamoto
ff8a03a623 x86: clean up apic_32.c, take 2
More white space and coding style clean up.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:36 +01:00
Ingo Molnar
f2633105cd x86: debug: double-check the empty zero page
temporary debugging - remove before this hits v2.6.25.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:36 +01:00
Yinghai Lu
48ddb154cf x86: not clear empty_zero_page again
empty_zero_page is in the .bss section, and it is cleared in clear_bss() by
x86_64_start_kernel(). So don't clear it again in mem_init().

Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:36 +01:00
Hiroshi Shimamoto
e83a5fdca8 x86: clean up apic_32/64.c
White space and coding style clean up.
Make apic_32/64.c similar.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:35 +01:00
Harvey Harrison
c4aba4a8ec x86: introduce force_sig_info_fault helper to X86_64
Use the force_sig_info_fault helper from X86_32 in X86_64.
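
For reference, the X86_32 helper being shared looks roughly like this:

    static void force_sig_info_fault(int si_signo, int si_code,
                                     unsigned long address,
                                     struct task_struct *tsk)
    {
            siginfo_t info;

            info.si_signo = si_signo;
            info.si_errno = 0;
            info.si_code  = si_code;
            info.si_addr  = (void __user *)address;
            force_sig_info(si_signo, &info, tsk);
    }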

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:35 +01:00
Harvey Harrison
1dc85be087 x86: begin fault_{32|64}.c unification
Move the X86_32-only get_segment_eip to X86_64.
Move the X86_64-only is_errata93 to X86_32.

Change X86_32 loop in is_prefetch to highlight the differences
between them.  Fold the logic from __is_prefetch in as well on
X86_32.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:35 +01:00
Harvey Harrison
b6795e65f1 x86: fault_32.c cleanup
We get die() from kdebug.h, so there is no need for a forward declaration.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:34 +01:00
Carlos R. Mafra
b75f53dba8 x86: fix style errors in nmi_int.c
This patch fixes most errors detected by checkpatch.pl.

                                     errors   lines of code   errors/KLOC
arch/x86/oprofile/nmi_int.c (after)       1             461           2.1
arch/x86/oprofile/nmi_int.c (before)     60             477         125.7

No code changed.

size:
   text    data     bss     dec     hex filename
   2675     264     472    3411     d53 nmi_int.o.after
   2675     264     472    3411     d53 nmi_int.o.before

md5sum:
  847aea0cc68fe1a2b5e7019439f3b4dd  nmi_int.o.after
  847aea0cc68fe1a2b5e7019439f3b4dd  nmi_int.o.before

Signed-off-by: Carlos R. Mafra <crmafra@gmail.com>
Reviewed-by: Jesper Juhl <jesper.juhl@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:33 +01:00
Quentin Barnes
b506a9d08b x86: code clarification patch to Kprobes arch code
When developing the Kprobes arch code for ARM, I ran across some code
found in x86 and s390 Kprobes arch code which I didn't consider as
good as it could be.

Once I figured out what the code was doing, I changed the code
for ARM Kprobes to work the way I felt was more appropriate.
I've tested the code this way in ARM for about a year and would
like to push the same change to the other affected architectures.

The code in question is in kprobe_exceptions_notify() which
does:
====
          /* kprobe_running() needs smp_processor_id() */
          preempt_disable();
          if (kprobe_running() &&
              kprobe_fault_handler(args->regs, args->trapnr))
                  ret = NOTIFY_STOP;
          preempt_enable();
====

For the moment, ignore the code having the preempt_disable()/
preempt_enable() pair in it.

The problem is that kprobe_running() needs to call smp_processor_id()
which will assert if preemption is enabled.  That sanity check by
smp_processor_id() makes perfect sense since calling it with preemption
enabled would return an unreliable result.

But the function kprobe_exceptions_notify() can be called from a
context where preemption could be enabled.  If that happens, the
assertion in smp_processor_id() happens and we're dead.  So what
the original author did (speculation on my part!) is put in the
preempt_disable()/preempt_enable() pair to simply defeat the check.

Once I figured out what was going on, I considered this an
inappropriate approach.  If kprobe_exceptions_notify() is called
from a preemptible context, we can't be in a kprobe processing
context at that time anyways since kprobes requires preemption to
already be disabled, so just check for preemption enabled, and if
so, blow out before ever calling kprobe_running().  I wrote the ARM
kprobe code like this:
====
          /* To be potentially processing a kprobe fault and to
           * trust the result from kprobe_running(), we have to
           * be non-preemptible. */
          if (!preemptible() && kprobe_running() &&
              kprobe_fault_handler(args->regs, args->trapnr))
                  ret = NOTIFY_STOP;
====

The above code has been working fine for ARM Kprobes for a year.
So I changed the x86 code (2.6.24-rc6) to be the same way and ran
the Systemtap tests on that kernel.  As on ARM, Systemtap on x86
comes up with the same test results either way, so it's a neutral
external functional change (as expected).

This issue has been discussed previously on linux-arm-kernel and the
Systemtap mailing lists.  Pointers to the two discussions:
http://lists.arm.linux.org.uk/lurker/message/20071219.223225.1f5c2a5e.en.html
http://sourceware.org/ml/systemtap/2007-q1/msg00251.html

Signed-off-by: Quentin Barnes <qbarnes@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Ananth N Mavinakayahanalli <ananth@in.ibm.com>
Acked-by: Ananth N Mavinakayahanalli <ananth@in.ibm.com>
2008-01-30 13:32:32 +01:00
Cyrill Gorcunov
3f4380a1e0 x86: get rid of checkpatch.pl complaints on apm_32.c
This patch eliminates most of the code-style errors
discovered by checkpatch.pl in arch/x86/kernel/apm_32.c.

no code changed:

      text    data     bss     dec     hex filename
     12142    1837      84   14063    36ef apm_32.o.before
     12142    1837      84   14063    36ef apm_32.o.after

   md5:
       2676b881ad55e387da4a995e8b9ee372  apm_32.o.before.asm
       2676b881ad55e387da4a995e8b9ee372  apm_32.o.after.asm

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:32 +01:00
Adrian Bunk
1c858087cd x86: default to PCI=y
PCI is one of the few pieces of hardware support where defaulting to y makes sense.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:32 +01:00
Sam Ravnborg
037f52aca4 x86: gitignore arch/x86/vdso files
Teach git to ignore generated files in
arch/x86/vdso/*

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Roland McGrath <roland@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:32 +01:00
Cyrill Gorcunov
8eed926053 x86: coding style cleanup for kernel/bootflag.c
This patch eliminates checkpatch.pl complaints on bootflag.c

No code changed:

   text    data     bss     dec     hex filename
    321       8       0     329     149 bootflag.o.before
    321       8       0     329     149 bootflag.o.after

   md5:
      9c1b474bcf25ddc1724a29c19880043f  bootflag.o.before.asm
      9c1b474bcf25ddc1724a29c19880043f  bootflag.o.after.asm

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:31 +01:00
Ingo Molnar
ff3cf85612 x86: hlt on early crash
H. Peter Anvin <hpa@zytor.com> wrote:

> It probably should actually HLT, to avoid sucking power, and stressing
> the thermal system.  We're dead at this point, and the early 486's
> which had problems with HLT will lock up - we don't care.
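
So the early-crash path ends in something like this sketch:

    /* We are dead anyway; stop burning power instead of spinning. */
    for (;;)
            asm volatile("hlt");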

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:31 +01:00
Nick Piggin
fb0328e2e6 x86: reduce CONFIG_X86_PPRO_FENCE bloat
CONFIG_X86_PPRO_FENCE bloats text:

i386 allmodconf: size mm/built-in.o
           text    data     bss     dec     hex         text ratio
vanilla: 163082   20372   40120  223574   36956          100.00%
bugfix : 163509   20372   40120  224001   36b01            0.26%
noppro : 162191   20372   40120  222683   365db         -  0.55%
both   : 162267   20372   40120  222759   36627         -  0.50% (+0.05% vs noppro)

So with the ppro memory ordering bug out of the way, the PG_uptodate fix
only adds 76 bytes of text.

allow this config to be specified by distros.

[ mingo@elte.hu: x86.git merge ]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:31 +01:00
Sam Ravnborg
583d0e90ea x86: unify arch/x86/lib/Makefile(s)
Trivial unification of Makefiles for the
x86 specific library part.
Linking order is slightly modified but should be harmless.

Tested doing a defconfig build before and after and saw
no build changes.

It adds almost as many lines as it deletes - because
I broke a few lines up for readability in the Makefile.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:31 +01:00
Sam Ravnborg
db569afa4e x86: unify arch/x86/boot/compressed/Makefile(s)
Trivial unification of the two Makefiles.
Tested doing a defconfig build for both 32 and 64 bit and
no build changes occurred.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:30 +01:00
Sam Ravnborg
6b0c3d44d3 x86: unify arch/x86/kernel/Makefile(s)
Combine the 32 and 64 bit specific Makefiles in one file.
While doing so, link order was (almost) preserved on 32 bit,
but on 64 bit it changed a lot.

Patch was checked with defconfig + allyesconfig builds.
The same .o files were linked in these configurations.

To keep the Makefiles readable, a few Kconfig
symbols were added/modified, and it was checked that
they are not used anywhere else.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:27 +01:00