android_kernel_xiaomi_sm8350/arch/x86/lib
Linus Torvalds 9063c61fd5 x86, 64-bit: Clean up user address masking
The discussion about using "access_ok()" in get_user_pages_fast() (see
commit 7f81890687: "x86: don't use
'access_ok()' as a range check in get_user_pages_fast()" for details and
end result), made us notice that x86-64 was really being very sloppy
about virtual address checking.

So be way more careful and straightforward about masking x86-64 virtual
addresses:

 - All the VIRTUAL_MASK* variants now cover half of the address
   space, it's not like we can use the full mask on a signed
   integer, and the larger mask just invites mistakes when
   applying it to either half of the 48-bit address space.

 - /proc/kcore's kc_offset_to_vaddr() becomes a lot more
   obvious when it transforms a file offset into a
   (kernel-half) virtual address.

 - Unify/simplify the 32-bit and 64-bit USER_DS definition to
   be based on TASK_SIZE_MAX.

This cleanup and more careful/obvious user virtual address checking also
uncovered a buglet in the x86-64 implementation of strnlen_user(): it
would do an "access_ok()" check on the whole potential area, even if the
string itself was much shorter, and thus return an error even for valid
strings. Our sloppy checking had hidden this.

So this fixes 'strnlen_user()' to do this properly, the same way we
already handled user strings in 'strncpy_from_user()'.  Namely by just
checking the first byte, and then relying on fault handling for the
rest.  That always works, since we impose a guard page that cannot be
mapped at the end of the user space address space (and even if we
didn't, we'd have the address space hole).

Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-06-20 15:40:00 -07:00
checksum_32.S
clear_page_64.S
copy_page_64.S
copy_user_64.S x86: wrong register was used in align macro 2008-07-30 10:10:39 -07:00
copy_user_nocache_64.S x86: wrong register was used in align macro 2008-07-30 10:10:39 -07:00
csum-copy_64.S
csum-partial_64.c x86: fix csum_partial() export 2008-05-13 19:38:47 +02:00
csum-wrappers_64.c x86: clean up csum-wrappers_64.c some more 2008-02-19 16:18:32 +01:00
delay.c x86: integrate delay functions. 2008-07-09 08:52:05 +02:00
getuser.S x86: use _types.h headers in asm where available 2009-02-13 11:35:01 -08:00
io_64.c x86: coding style fixes in arch/x86/lib/io_64.c 2008-02-19 16:18:32 +01:00
iomap_copy_64.S
Makefile x86: MSR: add a struct representation of an MSR 2009-06-10 12:18:42 +02:00
memcpy_32.c x86: coding style fixes to arch/x86/lib/memcpy_32.c 2008-04-17 17:40:49 +02:00
memcpy_64.S x86: memcpy, clean up 2009-03-12 12:21:17 +01:00
memmove_64.c x86: coding style fixes to arch/x86/lib/memmove_64.c 2008-04-17 17:40:48 +02:00
memset_64.S
mmx_32.c x86: clean up mmx_32.c 2008-04-17 17:40:47 +02:00
msr.c x86: MSR: add methods for writing of an MSR on several CPUs 2009-06-10 12:18:43 +02:00
putuser.S x86: merge putuser asm functions. 2008-07-09 09:14:13 +02:00
rwlock_64.S x86: rename .i assembler includes to .h 2007-10-17 20:16:29 +02:00
semaphore_32.S Generic semaphore implementation 2008-04-17 10:42:34 -04:00
string_32.c x86: coding style fixes to arch/x86/lib/string_32.c 2008-08-15 16:53:25 +02:00
strstr_32.c x86: coding style fixes to arch/x86/lib/strstr_32.c 2008-08-15 16:53:24 +02:00
thunk_32.S ftrace: trace irq disabled critical timings 2008-05-23 20:32:46 +02:00
thunk_64.S ftrace: trace irq disabled critical timings 2008-05-23 20:32:46 +02:00
usercopy_32.c x86: use early clobbers in usercopy*.c 2009-01-21 09:43:17 +01:00
usercopy_64.c x86, 64-bit: Clean up user address masking 2009-06-20 15:40:00 -07:00