android_kernel_xiaomi_sm8350/arch/x86/lib
Alexander van Heukelum 6fd92b63d0 x86: change x86 to use generic find_next_bit
The versions with inline assembly are in fact slower (in userspace) on
the machines I tested them on: an Athlon XP 2800+, a P4-like Xeon
2.8GHz, and an AMD Opteron 270. The i386 version needed a fix similar
to 06024f21 to avoid crashing the benchmark.

Benchmark, built with gcc -fomit-frame-pointer -Os: for each bitmap
size from 1 to 512, for each possible bitmap with exactly one bit set,
and for each possible offset, find the position of the first set bit
starting at that offset (roughly the loop sketched after the timing
table below). Times include setting up the bitmap and checking the
results.

		Athlon		Xeon		Opteron 32/64bit
x86-specific:	0m3.692s	0m2.820s	0m3.196s / 0m2.480s
generic:	0m2.622s	0m1.662s	0m2.100s / 0m1.572s
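
A minimal userspace sketch of the benchmark loop described above
(illustration only; the names bench_find_next_bit, MAX_BITS and WORDS
are made up here and are not part of this commit):

	#include <string.h>

	#define MAX_BITS      512
	#define BITS_PER_LONG (8 * sizeof(unsigned long))
	#define WORDS         ((MAX_BITS + BITS_PER_LONG - 1) / BITS_PER_LONG)

	/* implementation under test: x86-specific or generic */
	unsigned long find_next_bit(const unsigned long *addr,
				    unsigned long size, unsigned long offset);

	static void bench_find_next_bit(void)
	{
		unsigned long bitmap[WORDS];
		unsigned long size, bit, offset;

		/* each bitmap size, each bitmap with one bit set,
		 * each starting offset */
		for (size = 1; size <= MAX_BITS; size++) {
			for (bit = 0; bit < size; bit++) {
				memset(bitmap, 0, sizeof(bitmap));
				bitmap[bit / BITS_PER_LONG] |=
					1UL << (bit % BITS_PER_LONG);
				for (offset = 0; offset < size; offset++)
					(void)find_next_bit(bitmap, size, offset);
			}
		}
	}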

If the bitmap size is not a multiple of BITS_PER_LONG, and no set
(cleared) bit is found, the x86-specific find_next_bit
(find_next_zero_bit) can return a value outside of the range
[0, size]; the generic version always returns exactly size. The
generic version also uses unsigned long everywhere, while the x86
versions use a mishmash of int, unsigned (int), long and unsigned
long.
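
A deliberately unoptimized sketch of that contract (the real generic
code in lib/find_next_bit.c works one unsigned long at a time; the
name sketch_find_next_bit is made up for illustration):

	unsigned long sketch_find_next_bit(const unsigned long *addr,
					   unsigned long size,
					   unsigned long offset)
	{
		unsigned long i;

		for (i = offset; i < size; i++)
			if (addr[i / BITS_PER_LONG] &
			    (1UL << (i % BITS_PER_LONG)))
				return i;

		return size;	/* no set bit found: always exactly size */
	}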

Using the generic version does give a slightly bigger kernel, though.

defconfig:	   text    data     bss     dec     hex filename
x86-specific:	4738555  481232  626688 5846475  5935cb vmlinux (32 bit)
generic:	4738621  481232  626688 5846541  59360d vmlinux (32 bit)
x86-specific:	5392395  846568  724424 6963387  6a40bb vmlinux (64 bit)
generic:	5392458  846568  724424 6963450  6a40fa vmlinux (64 bit)

Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-26 19:21:16 +02:00
bitops_64.c x86: change x86 to use generic find_next_bit 2008-04-26 19:21:16 +02:00
checksum_32.S
clear_page_64.S
copy_page_64.S
copy_user_64.S
copy_user_nocache_64.S x86: fence oostores on 64-bit 2007-10-12 18:41:21 -07:00
csum-copy_64.S
csum-partial_64.c
csum-wrappers_64.c x86: clean up csum-wrappers_64.c some more 2008-02-19 16:18:32 +01:00
delay_32.c read_current_timer() cleanups 2008-02-06 10:41:02 -08:00
delay_64.c read_current_timer() cleanups 2008-02-06 10:41:02 -08:00
getuser_32.S
getuser_64.S
io_64.c x86: coding style fixes in arch/x86/lib/io_64.c 2008-02-19 16:18:32 +01:00
iomap_copy_64.S
Makefile x86: change x86 to use generic find_next_bit 2008-04-26 19:21:16 +02:00
memcpy_32.c x86: coding style fixes to arch/x86/lib/memcpy_32.c 2008-04-17 17:40:49 +02:00
memcpy_64.S
memmove_64.c x86: coding style fixes to arch/x86/lib/memmove_64.c 2008-04-17 17:40:48 +02:00
memset_64.S
mmx_32.c x86: clean up mmx_32.c 2008-04-17 17:40:47 +02:00
msr-on-cpu.c i386: simplify smp_call_function_single() call sequence in msr-on-cpu 2007-10-17 20:16:20 +02:00
putuser_32.S
putuser_64.S
rwlock_64.S x86: rename .i assembler includes to .h 2007-10-17 20:16:29 +02:00
semaphore_32.S Generic semaphore implementation 2008-04-17 10:42:34 -04:00
string_32.c x86: coding style fixes to arch/x86/lib/string_32.c 2008-04-17 17:40:48 +02:00
strstr_32.c x86: coding style fixes to arch/x86/lib/strstr_32.c 2008-04-17 17:40:49 +02:00
thunk_64.S Generic semaphore implementation 2008-04-17 10:42:34 -04:00
usercopy_32.c x86: coding style fixes to arch/x86/lib/usercopy_32.c 2008-04-17 17:40:51 +02:00
usercopy_64.c x86: use _ASM_EXTABLE macro in arch/x86/lib/usercopy_64.c 2008-02-04 16:47:57 +01:00