25e5566ed3
For the case where the source is not aligned modulo 8, we don't use load-twins to suck the data in, and this kills performance: normal loads allocate in the L1 cache (unlike load-twin), so big memcpys wipe the entire L1 D-cache. We need to allocate a register window to implement this properly, but as a nice side effect that actually simplifies a lot of things.

Signed-off-by: David S. Miller <davem@davemloft.net>
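The dispatch the message describes can be sketched in C. This is purely illustrative: the real code is sparc64 assembly, where an 8-byte-aligned source lets the copy use load-twin (`ldda`) instructions that bypass L1 allocation, while an unaligned source forces normal, cache-allocating loads. All names below are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the alignment check the commit message
 * refers to: is the source pointer aligned modulo 8? */
static int src_aligned_mod8(const void *src)
{
    return ((uintptr_t)src & 7) == 0;
}

/* Illustrative dispatcher (NOT the kernel's code).  On sparc64 the
 * aligned path would use load-twins, which do not allocate in the
 * L1 D-cache; the unaligned path uses normal loads, which do, so
 * large copies on that path evict the whole L1 D-cache. */
static void *ng_memcpy_sketch(void *dst, const void *src, size_t len)
{
    if (src_aligned_mod8(src)) {
        /* fast path: cache-bypassing block loads in the real code */
        return memcpy(dst, src, len);
    }
    /* slow path: ordinary loads that allocate in L1 */
    return memcpy(dst, src, len);
}
```

The patch's point is to make the unaligned-source case also use load-twins, which requires allocating a register window to stage the data for realignment.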
atomic.S
bitops.S
bzero.S
checksum.S
clear_page.S
copy_in_user.S
copy_page.S
csum_copy_from_user.S
csum_copy_to_user.S
csum_copy.S
GENbzero.S
GENcopy_from_user.S
GENcopy_to_user.S
GENmemcpy.S
GENpage.S
GENpatch.S
iomap.c
ipcsum.S
Makefile
mcount.S
memcmp.S
memmove.S
memscan.S
NG2copy_from_user.S
NG2copy_to_user.S
NG2memcpy.S
NG2page.S
NG2patch.S
NGbzero.S
NGcopy_from_user.S
NGcopy_to_user.S
NGmemcpy.S
NGpage.S
NGpatch.S
PeeCeeI.c
rwsem.S
strlen_user.S
strlen.S
strncmp.S
strncpy_from_user.S
U1copy_from_user.S
U1copy_to_user.S
U1memcpy.S
U3copy_from_user.S
U3copy_to_user.S
U3memcpy.S
U3patch.S
user_fixup.c
VISsave.S
xor.S