/*
 *  mm/mprotect.c
 *
 *  (C) Copyright 1994 Linus Torvalds
 *  (C) Copyright 2002 Christoph Hellwig
 *
 *  Address space accounting code	<alan@lxorguk.ukuu.org.uk>
 *  (C) Copyright 2002 Red Hat Inc, All Rights Reserved
 */

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/shm.h>
#include <linux/mman.h>
#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/security.h>
#include <linux/mempolicy.h>
#include <linux/personality.h>
#include <linux/syscalls.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/mmu_notifier.h>
#include <linux/migrate.h>
#include <linux/perf_event.h>
#include <linux/ksm.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>

/*
 * For a prot_numa update we only hold mmap_sem for read so there is a
 * potential race with faulting where a pmd was temporarily none. This
 * function checks for a transhuge pmd under the appropriate lock. It
 * returns a pte if it was successfully locked or NULL if it raced with
 * a transhuge insertion.
 */
static pte_t *lock_pte_protection(struct vm_area_struct *vma, pmd_t *pmd,
			unsigned long addr, int prot_numa, spinlock_t **ptl)
{
	pte_t *pte;
	spinlock_t *pmdl;

	/* !prot_numa is protected by mmap_sem held for write */
	if (!prot_numa)
		return pte_offset_map_lock(vma->vm_mm, pmd, addr, ptl);

	pmdl = pmd_lock(vma->vm_mm, pmd);
	if (unlikely(pmd_trans_huge(*pmd) || pmd_none(*pmd))) {
		spin_unlock(pmdl);
		return NULL;
	}

	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, ptl);
	spin_unlock(pmdl);
	return pte;
}

static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
		unsigned long addr, unsigned long end, pgprot_t newprot,
		int dirty_accountable, int prot_numa)
{
	struct mm_struct *mm = vma->vm_mm;
	pte_t *pte, oldpte;
	spinlock_t *ptl;
	unsigned long pages = 0;

	pte = lock_pte_protection(vma, pmd, addr, prot_numa, &ptl);
	if (!pte)
		return 0;

	arch_enter_lazy_mmu_mode();
	do {
		oldpte = *pte;
		if (pte_present(oldpte)) {
			pte_t ptent;
			bool updated = false;

			if (!prot_numa) {
				ptent = ptep_modify_prot_start(mm, addr, pte);
				if (pte_numa(ptent))
					ptent = pte_mknonnuma(ptent);
				ptent = pte_modify(ptent, newprot);
				/*
				 * Avoid taking write faults for pages we
				 * know to be dirty.
				 */
				if (dirty_accountable && pte_dirty(ptent) &&
						(pte_soft_dirty(ptent) ||
						 !(vma->vm_flags & VM_SOFTDIRTY)))
					ptent = pte_mkwrite(ptent);
				ptep_modify_prot_commit(mm, addr, pte, ptent);
				updated = true;
			} else {
				struct page *page;

				page = vm_normal_page(vma, addr, oldpte);
				if (page && !PageKsm(page)) {
					if (!pte_numa(oldpte)) {
						ptep_set_numa(mm, addr, pte);
						updated = true;
					}
				}
			}
			if (updated)
				pages++;
} else if (IS_ENABLED(CONFIG_MIGRATION) && !pte_file(oldpte)) {
|
[PATCH] Swapless page migration: add R/W migration entries
Implement read/write migration ptes
We take the upper two swapfiles for the two types of migration ptes and define
a series of macros in swapops.h.
The VM is modified to handle the migration entries. migration entries can
only be encountered when the page they are pointing to is locked. This limits
the number of places one has to fix. We also check in copy_pte_range and in
mprotect_pte_range() for migration ptes.
We check for migration ptes in do_swap_cache and call a function that will
then wait on the page lock. This allows us to effectively stop all accesses
to apge.
Migration entries are created by try_to_unmap if called for migration and
removed by local functions in migrate.c
From: Hugh Dickins <hugh@veritas.com>
Several times while testing swapless page migration (I've no NUMA, just
hacking it up to migrate recklessly while running load), I've hit the
BUG_ON(!PageLocked(p)) in migration_entry_to_page.
This comes from an orphaned migration entry, unrelated to the current
correctly locked migration, but hit by remove_anon_migration_ptes as it
checks an address in each vma of the anon_vma list.
Such an orphan may be left behind if an earlier migration raced with fork:
copy_one_pte can duplicate a migration entry from parent to child, after
remove_anon_migration_ptes has checked the child vma, but before it has
removed it from the parent vma. (If the process were later to fault on this
orphaned entry, it would hit the same BUG from migration_entry_wait.)
This could be fixed by locking anon_vma in copy_one_pte, but we'd rather
not. There's no such problem with file pages, because vma_prio_tree_add
adds child vma after parent vma, and the page table locking at each end is
enough to serialize. Follow that example with anon_vma: add new vmas to the
tail instead of the head.
(There's no corresponding problem when inserting migration entries,
because a missed pte will leave the page count and mapcount high, which is
allowed for. And there's no corresponding problem when migrating via swap,
because a leftover swap entry will be correctly faulted. But the swapless
method has no refcounting of its entries.)
From: Ingo Molnar <mingo@elte.hu>
pte_unmap_unlock() takes the pte pointer as an argument.
From: Hugh Dickins <hugh@veritas.com>
Several times while testing swapless page migration, gcc has tried to exec
a pointer instead of a string: smells like COW mappings are not being
properly write-protected on fork.
The protection in copy_one_pte looks very convincing, until at last you
realize that the second arg to make_migration_entry is a boolean "write",
and SWP_MIGRATION_READ is 30.
Anyway, it's better done like in change_pte_range, using
is_write_migration_entry and make_migration_entry_read.
From: Hugh Dickins <hugh@veritas.com>
Remove unnecessary obfuscation from sys_swapon's range check on swap type,
which blew up causing memory corruption once swapless migration made
MAX_SWAPFILES no longer 2 ^ MAX_SWAPFILES_SHIFT.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Christoph Lameter <clameter@engr.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
From: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-23 05:03:35 -04:00
|
|
|
			swp_entry_t entry = pte_to_swp_entry(oldpte);

			if (is_write_migration_entry(entry)) {
				pte_t newpte;
				/*
				 * A protection check is difficult so
				 * just be safe and disable write
				 */
				make_migration_entry_read(&entry);
				newpte = swp_entry_to_pte(entry);
				if (pte_swp_soft_dirty(oldpte))
					newpte = pte_swp_mksoft_dirty(newpte);
				set_pte_at(mm, addr, pte, newpte);

				pages++;
			}
		}
	} while (pte++, addr += PAGE_SIZE, addr != end);
	arch_leave_lazy_mmu_mode();
	pte_unmap_unlock(pte - 1, ptl);

	return pages;
}

static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
		pud_t *pud, unsigned long addr, unsigned long end,
		pgprot_t newprot, int dirty_accountable, int prot_numa)
{
	pmd_t *pmd;
	struct mm_struct *mm = vma->vm_mm;
	unsigned long next;
	unsigned long pages = 0;
	unsigned long nr_huge_updates = 0;
	unsigned long mni_start = 0;

	pmd = pmd_offset(pud, addr);
	do {
		unsigned long this_pages;

		next = pmd_addr_end(addr, end);
		if (!pmd_trans_huge(*pmd) && pmd_none_or_clear_bad(pmd))
			continue;

		/* invoke the mmu notifier if the pmd is populated */
		if (!mni_start) {
			mni_start = addr;
			mmu_notifier_invalidate_range_start(mm, mni_start, end);
		}

		if (pmd_trans_huge(*pmd)) {
			if (next - addr != HPAGE_PMD_SIZE)
				split_huge_page_pmd(vma, addr, pmd);
			else {
				int nr_ptes = change_huge_pmd(vma, pmd, addr,
						newprot, prot_numa);

				if (nr_ptes) {
					if (nr_ptes == HPAGE_PMD_NR) {
						pages += HPAGE_PMD_NR;
						nr_huge_updates++;
					}

					/* huge pmd was handled */
					continue;
				}
			}
			/* fall through, the trans huge pmd just split */
		}
		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
				dirty_accountable, prot_numa);
		pages += this_pages;
	} while (pmd++, addr = next, addr != end);

	if (mni_start)
		mmu_notifier_invalidate_range_end(mm, mni_start, end);

	if (nr_huge_updates)
		count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
	return pages;
}

static inline unsigned long change_pud_range(struct vm_area_struct *vma,
		pgd_t *pgd, unsigned long addr, unsigned long end,
		pgprot_t newprot, int dirty_accountable, int prot_numa)
{
	pud_t *pud;
	unsigned long next;
	unsigned long pages = 0;

	pud = pud_offset(pgd, addr);
	do {
		next = pud_addr_end(addr, end);
		if (pud_none_or_clear_bad(pud))
			continue;
		pages += change_pmd_range(vma, pud, addr, next, newprot,
				dirty_accountable, prot_numa);
	} while (pud++, addr = next, addr != end);

	return pages;
}

static unsigned long change_protection_range(struct vm_area_struct *vma,
		unsigned long addr, unsigned long end, pgprot_t newprot,
		int dirty_accountable, int prot_numa)
{
	struct mm_struct *mm = vma->vm_mm;
	pgd_t *pgd;
	unsigned long next;
	unsigned long start = addr;
	unsigned long pages = 0;

	BUG_ON(addr >= end);
	pgd = pgd_offset(mm, addr);
	flush_cache_range(vma, addr, end);
	set_tlb_flush_pending(mm);
	do {
		next = pgd_addr_end(addr, end);
		if (pgd_none_or_clear_bad(pgd))
			continue;
		pages += change_pud_range(vma, pgd, addr, next, newprot,
				dirty_accountable, prot_numa);
	} while (pgd++, addr = next, addr != end);

	/* Only flush the TLB if we actually modified any entries: */
	if (pages)
		flush_tlb_range(vma, start, end);
	clear_tlb_flush_pending(mm);

	return pages;
}

unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
		       unsigned long end, pgprot_t newprot,
		       int dirty_accountable, int prot_numa)
{
	unsigned long pages;

	if (is_vm_hugetlb_page(vma))
		pages = hugetlb_change_protection(vma, start, end, newprot);
	else
		pages = change_protection_range(vma, start, end, newprot,
				dirty_accountable, prot_numa);

	return pages;
}

int
mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
	unsigned long start, unsigned long end, unsigned long newflags)
{
	struct mm_struct *mm = vma->vm_mm;
	unsigned long oldflags = vma->vm_flags;
	long nrpages = (end - start) >> PAGE_SHIFT;
	unsigned long charged = 0;
	pgoff_t pgoff;
	int error;
	int dirty_accountable = 0;

	if (newflags == oldflags) {
		*pprev = vma;
		return 0;
	}

	/*
	 * If we make a private mapping writable we increase our commit;
	 * but (without finer accounting) cannot reduce our commit if we
	 * make it unwritable again. hugetlb mappings were accounted for
	 * even if read-only, so there is no need to account for them here.
	 */
	if (newflags & VM_WRITE) {
		if (!(oldflags & (VM_ACCOUNT|VM_WRITE|VM_HUGETLB|
						VM_SHARED|VM_NORESERVE))) {
			charged = nrpages;
			if (security_vm_enough_memory_mm(mm, charged))
				return -ENOMEM;
			newflags |= VM_ACCOUNT;
		}
	}

	/*
	 * First try to merge with previous and/or next vma.
	 */
	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
	*pprev = vma_merge(mm, *pprev, start, end, newflags,
			vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma));
	if (*pprev) {
		vma = *pprev;
		goto success;
	}

	*pprev = vma;

	if (start != vma->vm_start) {
		error = split_vma(mm, vma, start, 1);
		if (error)
			goto fail;
	}

	if (end != vma->vm_end) {
		error = split_vma(mm, vma, end, 0);
		if (error)
			goto fail;
	}

success:
	/*
	 * vm_flags and vm_page_prot are protected by the mmap_sem
	 * held in write mode.
	 */
	vma->vm_flags = newflags;
	dirty_accountable = vma_wants_writenotify(vma);
	vma_set_page_prot(vma);

	change_protection(vma, start, end, vma->vm_page_prot,
			  dirty_accountable, 0);

	vm_stat_account(mm, oldflags, vma->vm_file, -nrpages);
	vm_stat_account(mm, newflags, vma->vm_file, nrpages);
	perf_event_mmap(vma);
	return 0;

fail:
	vm_unacct_memory(charged);
	return error;
}

SYSCALL_DEFINE3(mprotect, unsigned long, start, size_t, len,
		unsigned long, prot)
{
	unsigned long vm_flags, nstart, end, tmp, reqprot;
	struct vm_area_struct *vma, *prev;
	int error = -EINVAL;
	const int grows = prot & (PROT_GROWSDOWN|PROT_GROWSUP);
	prot &= ~(PROT_GROWSDOWN|PROT_GROWSUP);
	if (grows == (PROT_GROWSDOWN|PROT_GROWSUP)) /* can't be both */
		return -EINVAL;

	if (start & ~PAGE_MASK)
		return -EINVAL;
	if (!len)
		return 0;
	len = PAGE_ALIGN(len);
	end = start + len;
	if (end <= start)
		return -ENOMEM;
	if (!arch_validate_prot(prot))
		return -EINVAL;

	reqprot = prot;
	/*
	 * Does the application expect PROT_READ to imply PROT_EXEC:
	 */
	if ((prot & PROT_READ) && (current->personality & READ_IMPLIES_EXEC))
		prot |= PROT_EXEC;

	vm_flags = calc_vm_prot_bits(prot);

	down_write(&current->mm->mmap_sem);

	vma = find_vma(current->mm, start);
	error = -ENOMEM;
	if (!vma)
		goto out;
	prev = vma->vm_prev;
	if (unlikely(grows & PROT_GROWSDOWN)) {
		if (vma->vm_start >= end)
			goto out;
		start = vma->vm_start;
		error = -EINVAL;
		if (!(vma->vm_flags & VM_GROWSDOWN))
			goto out;
	} else {
		if (vma->vm_start > start)
			goto out;
		if (unlikely(grows & PROT_GROWSUP)) {
			end = vma->vm_end;
			error = -EINVAL;
			if (!(vma->vm_flags & VM_GROWSUP))
				goto out;
		}
	}
	if (start > vma->vm_start)
		prev = vma;

	for (nstart = start ; ; ) {
		unsigned long newflags;

		/* Here we know that vma->vm_start <= nstart < vma->vm_end. */

		newflags = vm_flags;
		newflags |= (vma->vm_flags & ~(VM_READ | VM_WRITE | VM_EXEC));

		/* newflags >> 4 shift VM_MAY% in place of VM_% */
		if ((newflags & ~(newflags >> 4)) & (VM_READ | VM_WRITE | VM_EXEC)) {
			error = -EACCES;
			goto out;
		}

		error = security_file_mprotect(vma, reqprot, prot);
		if (error)
			goto out;

		tmp = vma->vm_end;
		if (tmp > end)
			tmp = end;
		error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
		if (error)
			goto out;
		nstart = tmp;

		if (nstart < prev->vm_end)
			nstart = prev->vm_end;
		if (nstart >= end)
			goto out;

		vma = prev->vm_next;
		if (!vma || vma->vm_start != nstart) {
			error = -ENOMEM;
			goto out;
		}
	}
out:
	up_write(&current->mm->mmap_sem);
	return error;
}