This patch implements full LDT handling in SKAS:
* UML holds its own LDT table, used to deliver data on
modify_ldt(READ) (see the sketch after this list)
* UML disables the default_ldt inherited from the host (SKAS3),
or resets LDT entries set by the host's libc and inherited in
SKAS0
* A new global variable, skas_needs_stub, is introduced; it can
be used to decide whether stub pages must be supported or not.
* The syscall stub is used to replace the missing PTRACE_LDT
(therefore, write_ldt_entry needs to be modified)
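A minimal sketch of the READ path, assuming a shadow table guarded by a
semaphore (the identifiers here are illustrative, not the exact UML ones):

    /* serve modify_ldt() READ requests from UML's shadow copy */
    static int read_ldt(void __user *ptr, unsigned long bytecount)
    {
            int err;

            if (bytecount > LDT_ENTRIES * LDT_ENTRY_SIZE)
                    bytecount = LDT_ENTRIES * LDT_ENTRY_SIZE;

            down(&ldt_semaphore);
            err = copy_to_user(ptr, shadow_ldt, bytecount);
            up(&ldt_semaphore);

            return err ? -EFAULT : bytecount;
    }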
Signed-off-by: Bodo Stroesser <bstroesser@fujitsu-siemens.com>
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Cc: Paolo Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Christoph Lameter demonstrated very poor scalability on the SGI 512-way, with
a many-threaded application which concurrently initializes different parts of
a large anonymous area.
This patch corrects that, by using a separate spinlock per page table page, to
guard the page table entries in that page, instead of using the mm's single
page_table_lock. (But even then, page_table_lock is still used to guard page
table allocation, and anon_vma allocation.)
In this implementation, the spinlock is tucked inside the struct page of the
page table page: with a BUILD_BUG_ON in case it overflows - which it would in
the case of 32-bit PA-RISC with spinlock debugging enabled.
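Roughly, the idea looks like this (a sketch with a simplified field layout;
the real struct page shares the lock's space with existing fields):

    struct page {
            unsigned long flags;
            union {
                    unsigned long private;  /* ordinary pages */
                    spinlock_t ptl;         /* page table pages */
            };
            /* ... remaining fields elided ... */
    };

    static inline void pte_lock_init(struct page *page)
    {
            /* the lock must fit into the word it shares, or struct
             * page grows - e.g. 32-bit PA-RISC with spinlock debug */
            BUILD_BUG_ON(sizeof(spinlock_t) > sizeof(unsigned long));
            spin_lock_init(&page->ptl);
    }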
Splitting the lock is not quite for free: another cacheline access. Ideally,
I suppose we would use split ptlock only for multi-threaded processes on
multi-cpu machines; but deciding that dynamically would have its own costs.
So for now enable it by config, at some number of cpus - since the Kconfig
language doesn't support inequalities, let the preprocessor compare that
with NR_CPUS. But I don't think it's worth being user-configurable: for good
testing of both split and unsplit configs, split now at 4 cpus, and perhaps
change that to 8 later.
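So the comparison ends up in cpp rather than Kconfig, schematically (the
config symbol name is an assumption):

    /* CONFIG_SPLIT_PTLOCK_CPUS arrives from Kconfig as a plain number */
    #if NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS
    #define pte_lockptr(mm, pmd)    (&pmd_page(*(pmd))->ptl)
    #else
    /* small configs keep the single mm-wide page_table_lock */
    #define pte_lockptr(mm, pmd)    (&(mm)->page_table_lock)
    #endif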
There is a benefit even for singly threaded processes: kswapd can be attacking
one part of the mm while another part is busy faulting.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Convert those few architectures which are calling pud_alloc, pmd_alloc,
pte_alloc_map on a user mm, not to take the page_table_lock first, nor drop it
after. Each of these can continue to use pte_alloc_map, no need to change
over to pte_alloc_map_lock, they're neither racy nor swappable.
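Schematically, the conversion in such a caller (an illustrative fragment,
not any specific architecture's code):

    /* before: allocation had to happen under the mm-wide lock */
    spin_lock(&mm->page_table_lock);
    pte = pte_alloc_map(mm, pmd, addr);
    spin_unlock(&mm->page_table_lock);

    /* after: the allocators lock internally when they really allocate;
     * since these callers are neither racy nor swappable, plain
     * pte_alloc_map (not pte_alloc_map_lock) is enough */
    pte = pte_alloc_map(mm, pmd, addr);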
In the sparc64 io_remap_pfn_range, flush_tlb_range then falls outside of the
page_table_lock: that's okay, on sparc64 it's like flush_tlb_mm, and that has
always been called from outside of page_table_lock in dup_mmap.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
We were leaking pmd pages when 3_LEVEL_PGTABLES was enabled. This fixes that.
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This change enables SKAS0/SKAS3 to work with all combinations of /proc/mm and
PTRACE_FAULTINFO being available or not.
It also changes the initialization of proc_mm and ptrace_faultinfo slightly,
to ease forcing SKAS0 on a patched host. Forcing UML to run without /proc/mm
or PTRACE_FAULTINFO via a command-line parameter can be implemented with a
setup function that resets the related variable.
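For example, something along these lines would do (a sketch; the parameter
name and handler are invented for illustration):

    static int __init mode_skas0_cmd_param(char *str, int *add)
    {
            /* pretend the host lacks the skas3 extensions */
            proc_mm = 0;
            ptrace_faultinfo = 0;
            return 0;
    }

    __uml_setup("mode=skas0", mode_skas0_cmd_param,
    "mode=skas0\n"
    "    Run in SKAS0 mode even on a host with the skas3 patch.\n");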
Signed-off-by: Bodo Stroesser <bstroesser@fujitsu-siemens.com>
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch implements the clone-stub mechanism, which allows skas0 to run
with proc_mm==0, even if the libc in UML uses modify_ldt.
Note: There is a bug in the skas3.v7 host patch that prevents UML-skas from
running properly on an SMP box. In full skas3, I never really saw problems,
but in skas0 they showed up.
More commentary by jdike - What this patch does is make sure that the host
parent of each new host process matches the UML parent of the corresponding
UML process. This ensures that any changed LDTs are inherited. This is
done by having clone actually called by the UML process from its stub,
rather than by the kernel. We have special syscall stubs that are loaded
onto the stub code page because that code must be completely
self-contained. These stubs are given C interfaces, and used like normal C
functions, but there are subtleties. Principally, we have to be careful
about stack variables in stub_clone_handler after the clone. The code is
written so that there aren't any - everything boils down to a fixed
address. If there were any locals, references to them after the clone
would be wrong because the stack just changed.
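In outline (STUB_DATA, stub_syscall2 and trap_myself stand in for the
stub's self-contained helpers; this is a sketch of the idea, not the exact
source):

    void stub_clone_handler(void)
    {
            long err;

            /* the child's new stack lives inside the stub data page */
            err = stub_syscall2(__NR_clone,
                                CLONE_PARENT | CLONE_FILES | SIGCHLD,
                                STUB_DATA + PAGE_SIZE / 2 - sizeof(void *));

            /* parent and child both report through the fixed address -
             * nothing on the old stack is dereferenced after the clone */
            *((long *) STUB_DATA) = err;
            trap_myself();  /* SIGSTOP, so the UML kernel collects err */
    }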
Signed-off-by: Bodo Stroesser <bstroesser@fujitsu-siemens.com>
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
UML has had two modes of operation - an insecure, slow mode (tt mode) in
which the kernel is mapped into every process address space which requires
no host kernel modifications, and a secure, faster mode (skas mode) in
which the UML kernel is in a separate host address space, which requires a
patch to the host kernel.
This patch implements something very close to skas mode for hosts which
don't support skas - I'm calling this skas0. It provides the security of
the skas host patch, and some of the performance gains.
The two main things that are provided by the skas patch, /proc/mm and
PTRACE_FAULTINFO, are implemented in a way that requires no host patch.
For the remote address space changing stuff (mmap, munmap, and mprotect),
we set aside two pages in the process above its stack, one of which
contains a little bit of code which can call mmap et al.
To update the address space, the system call information (system call
number and arguments) are written to the stub page above the code. The
%esp is set to the beginning of the data, the %eip is set to the start of
the stub, and it repeatedly pops the information into its registers and
makes the system call until it sees a system call number of zero. This is
to amortize the cost of the context switch across multiple address space
updates.
When the updates are done, it SIGSTOPs itself, and the kernel process
continues what it was doing.
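The kernel side fills the data page with fixed-size records, roughly like
this (the record layout is assumed for illustration):

    /* append one syscall record; a zero syscall number terminates the
     * list and makes the stub SIGSTOP itself */
    static unsigned long *add_stub_syscall(unsigned long *sc, long nr,
                                           long a1, long a2, long a3,
                                           long a4, long a5, long a6)
    {
            *sc++ = nr;
            *sc++ = a1; *sc++ = a2; *sc++ = a3;
            *sc++ = a4; *sc++ = a5; *sc++ = a6;
            return sc;
    }

    /* e.g. one mprotect, then the terminator: */
    sc = add_stub_syscall(sc, __NR_mprotect, addr, len, prot, 0, 0, 0);
    *sc = 0;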
For a PTRACE_FAULTINFO replacement, we set up a SIGSEGV handler in the
child, and let it handle segfaults rather than nullifying them. The
handler is in the same page as the mmap stub. The second page is used as
the stack. The handler reads cr2 and err from the sigcontext, sticks them
at the base of the stack in a faultinfo struct, and SIGSTOPs itself. The
kernel then reads the faultinfo and handles the fault.
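In outline, with raw-syscall helpers standing in for libc, which is not
mapped in the child (the faultinfo fields and constants here are
assumptions):

    /* lives on the stub code page, next to the mmap stub */
    void stub_segv_handler(int sig)
    {
            struct sigcontext *sc = (struct sigcontext *) (&sig + 1);
            struct faultinfo *fi = (struct faultinfo *) STUB_STACK_BASE;

            fi->cr2 = sc->cr2;              /* faulting address */
            fi->error_code = sc->err;       /* read/write, user/kernel */

            /* tell the UML kernel the faultinfo is ready */
            stub_syscall2(__NR_kill, stub_syscall0(__NR_getpid), SIGSTOP);
    }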
A complication on x86_64 is that this involves resetting the registers to
the segfault values when the process is inside the kill system call. This
breaks on x86_64 because %rcx will contain %rip: you tell SYSRET where to
return by putting that value in %rcx. So, this corrupts %rcx on
return from the segfault. To work around this, I added an
arch_finish_segv, which on x86 does nothing, but which on x86_64 ptraces
the child back through the sigreturn. This causes %rcx to be restored by
sigreturn and avoids the corruption. Ultimately, I think I will replace
this with the trick of having it send itself a blocked signal which will be
unblocked by the sigreturn. This will allow it to be stopped just after
the sigreturn, and PTRACE_SYSCALLed without all the back-and-forth of
PTRACE_SYSCALLing it through sigreturn.
This runs on a stock host, so theoretically (and hopefully), tt mode isn't
needed any more. We need to make sure that this is better in every way
than tt mode, though. I'm concerned about the speed of address space
updates and page fault handling, since they involve extra round-trips to
the child. We can amortize the round-trip cost for large address space
updates by writing all of the operations to the data page and having the
child execute them all at the same time. This will help fork and exec, but
not page faults, since they involve only one page.
I can't think of any way to help page faults, except to add something like
PTRACE_FAULTINFO to the host. There is PTRACE_SIGINFO, but UML doesn't use
siginfo for SIGSEGV (or anything else) because there isn't enough
information in the siginfo struct to handle page faults (the faulting
operation type is missing). Adding that would make PTRACE_SIGINFO a usable
equivalent to PTRACE_FAULTINFO.
As for the code itself:
- The system call stub is in arch/um/kernel/sys-$(SUBARCH)/stub.S. It is
put in its own section of the binary along with stub_segv_handler in
arch/um/kernel/skas/process.c. This is manipulated with run_syscall_stub
in arch/um/kernel/skas/mem_user.c. syscall_stub will execute any system
call at all, but it's only used for mmap, munmap, and mprotect.
- The x86_64 stub calls sigreturn by hand rather than allowing the normal
sigreturn to happen, because the normal sigreturn is a SA_RESTORER in
UML's address space provided by libc. Needless to say, this is not
available in the child's address space. Also, it does a couple of odd
pops before that which restore the stack to the state it was in at the
time the signal handler was called.
- There is a new field in the arch mmu_context, which is now a union
(see the sketch after this list). This is the pid to be manipulated
rather than the /proc/mm file descriptor. Code which deals with this now
checks proc_mm to see whether it should use the usual skas code or the
new code.
- userspace_tramp is now used to create a new host process for every UML
process, rather than one per UML processor. It checks proc_mm and
ptrace_faultinfo to decide whether to map in the pages above its stack.
- start_userspace now makes CLONE_VM conditional on proc_mm since we need
separate address spaces now.
- switch_mm_skas now just sets userspace_pid[0] to the new pid rather
than using PTRACE_SWITCH_MM. There is an addition to userspace which updates
its idea of the pid being manipulated each time around the loop. This is
important on exec, when the pid will change underneath userspace().
- The stub page has a pte, but it can't be mapped in using tlb_flush
because it is part of the tlb_flush mechanism itself. This is why it must be
mapped in by userspace_tramp.
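A sketch of the union mentioned in the mmu_context item above (field names
assumed):

    /* the address space is named either by a /proc/mm fd (skas3)
     * or by the pid of the dedicated host process (skas0) */
    union mm_id {
            int mm_fd;      /* host has /proc/mm */
            int pid;        /* one host process per UML process */
    };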
Other random things:
- The stub section in uml.lds.S is page aligned. This page is written
out to the backing vm file in setup_physmem because it is mapped from
there into user processes.
- There's some confusion with TASK_SIZE now that there are a couple of
extra pages that the process can't use. TASK_SIZE is considered by the
elf code to be the usable process memory, which is reasonable, so it is
decreased by two pages. This confuses the definition of
USER_PGDS_IN_LAST_PML4, making it too small because of the rounding down
of the uneven division. So we round it to the nearest PGDIR_SIZE rather
than the lower one (see the sketch after this list).
- I added a missing PT_SYSCALL_ARG6_OFFSET macro.
- um_mmu.h was made into a userspace-usable file.
- proc_mm and ptrace_faultinfo are globals which say whether the host
supports these features.
- There is a bad interaction between the mm.nr_ptes check at the end of
exit_mmap, stack randomization, and skas0. exit_mmap will stop freeing
pages at the PGDIR_SIZE boundary after the last vma. If the stack isn't
on the last page table page, the last pte page won't be freed, as it
should be since the stub ptes are there, and exit_mmap will BUG because
there is an unfreed page. To get around this, TASK_SIZE is set to the
next lowest PGDIR_SIZE boundary and mm->nr_ptes is decremented after the
calls to init_stub_pte. This ensures that we know the process stack (and
all other process mappings) will be below the top page table page, and
thus we know that mm->nr_ptes will be one too many, and can be
decremented.
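The rounding mentioned in the TASK_SIZE item above, schematically (the
macro name is illustrative):

    /* round to the nearest PGDIR_SIZE boundary instead of the next
     * lower one, so the two reserved stub pages don't cost a pgd */
    #define NEAREST_PGDIR(addr) \
            (((addr) + PGDIR_SIZE / 2) & PGDIR_MASK)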
Things that need fixing:
- We may need better assurances that the stub code is PIC.
- The stub pte is set up in init_new_context_skas; alloc_pgdir is
probably the right place.
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!