In order to align the coding styles of ip_vs_zero_stats() and
its child function ip_vs_zero_estimator(), clear the ip_vs_stats
members explicitly rather than doing a limited memset().
This was chosen over modifying ip_vs_zero_estimator() to use
memset(), as it is more robust against changes to the members
of the relevant structures. memset() would be preferred if
all members of the structure were to be cleared.
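As a rough illustration (the member names below are only a sketch, not
the exact ip_vs_stats layout), the zeroing goes from a partial memset()
to explicit assignments under the stats lock:

  spin_lock_bh(&stats->lock);
  stats->conns    = 0;
  stats->inpkts   = 0;
  stats->outpkts  = 0;
  stats->inbytes  = 0;
  stats->outbytes = 0;
  spin_unlock_bh(&stats->lock);

which mirrors the per-member clearing already done in
ip_vs_zero_estimator().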
Cc: Sven Wegener <sven.wegener@stealer.net>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
It's a global variable and automatically initialized to zero. And now we can
also initialize the lock at compile time.
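Roughly, assuming the lock is a spinlock member called 'lock', the
runtime spin_lock_init() call is replaced by a static initializer; the
remaining members need no initializer because static storage is zeroed:

  static struct ip_vs_stats ip_vs_stats = {
          .lock = __SPIN_LOCK_UNLOCKED(ip_vs_stats.lock),
  };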
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Acked-by: Simon Horman <horms@verge.net.au>
There's no reason for dynamically allocating an estimator object for every
stats object. Directly embed an estimator object into every stats object and
switch to using the kernel-provided list implementation. This makes the code
much simpler and faster, as we do not need to traverse the list of all
estimators to find the one belonging to a stats object. There's no need to use
an rwlock, as we only have one reader. Also reorder the members of the
estimator structure slightly to avoid padding overhead. This can't be done
with the stats object as the members are currently copied to our user space
object via memcpy() and changing it would break ABI.
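A rough sketch of the new layout (field names illustrative, not the
exact in-tree definitions): the estimator is embedded in the stats
object and carries a list_head for the kernel list implementation, so
removal no longer needs a traversal:

  struct ip_vs_estimator {
          struct list_head list;    /* chained on one global estimator list */
          u32 last_conns;           /* last-seen counters, grouped to avoid padding */
          u32 last_inpkts;
          u32 last_outpkts;
  };

  struct ip_vs_stats {
          /* counters copied to user space via memcpy(), layout unchanged (ABI) */
          struct ip_vs_estimator est;   /* embedded, no separate allocation */
  };

  /* killing an estimator becomes a direct unlink */
  list_del(&stats->est.list);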
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Acked-by: Simon Horman <horms@verge.net.au>
Being able to discard these functions saves a couple of bytes at runtime. The
cleanup functions can't be annotated with __exit as they are also called from
init functions.
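For illustration (names hypothetical, not the actual ipvs functions):

  static void ip_vs_foo_cleanup(void)     /* no __exit: also used on init error paths */
  {
          kfree(foo_table);
  }

  static int __init ip_vs_foo_init(void)  /* __init: discarded after boot */
  {
          foo_table = kcalloc(FOO_TAB_SIZE, sizeof(*foo_table), GFP_KERNEL);
          if (!foo_table)
                  return -ENOMEM;
          if (foo_register() < 0) {
                  ip_vs_foo_cleanup();    /* init error path reuses the cleanup helper */
                  return -EINVAL;
          }
          return 0;
  }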
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Acked-by: Simon Horman <horms@verge.net.au>
No need to do it at runtime and this saves a couple of bytes in the text
section.
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Acked-by: Simon Horman <horms@verge.net.au>
There is a slight chance for a deadlock in the estimator code. We can't call
del_timer_sync() while holding our lock, as the timer might be active and
spinning for the lock on another cpu. Work around this issue by using
try_to_del_timer_sync() and releasing the lock. We could actually delete the
timer outside of our lock, as the add and kill functions are only ever called
from userspace via [gs]etsockopt() and are serialized by a mutex, but better
make this explicit.
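The workaround looks roughly like this (a sketch of the idiom, not the
exact ipvs code); try_to_del_timer_sync() returns a negative value
instead of spinning when the timer handler is currently running, so we
can drop the lock and let the handler finish:

  spin_lock_bh(&est_lock);
  while (try_to_del_timer_sync(&est_timer) < 0) {
          /* the timer handler is running and may spin on est_lock: back off */
          spin_unlock_bh(&est_lock);
          cpu_relax();
          spin_lock_bh(&est_lock);
  }
  /* now safe to unlink the estimator and re-arm the timer if needed */
  spin_unlock_bh(&est_lock);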
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Cc: stable <stable@kernel.org>
Acked-by: Simon Horman <horms@verge.net.au>
Commit 998e7a7680 ("ipvs: Use kthread_run()
instead of doing a double-fork via kernel_thread()") introduced a possible
deadlock in the sync code. We need to use the _bh versions for the lock, as the
lock is also accessed from a bottom half.
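A minimal sketch of the rule (lock and list names illustrative): a lock
that is also taken from a bottom half must disable bottom halves when
taken in process context, otherwise the softirq can interrupt the lock
holder on the same CPU and spin forever:

  spin_lock_bh(&ip_vs_sync_lock);          /* not plain spin_lock() */
  list_add_tail(&sb->list, &ip_vs_sync_queue);
  spin_unlock_bh(&ip_vs_sync_lock);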
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Acked-by: Simon Horman <horms@verge.net.au>
The socket lock is there to protect the normal UDP receive path.
Encapsulation UDP sockets don't need that protection. In fact
the locking is deadly for them as they may contain another UDP
packet within, possibly with the same addresses.
Also the nested bit was copied from TCP. TCP needs it because
of accept(2) spawning sockets. This simply doesn't apply to UDP
so I've removed it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Based upon bug reports by Stephen Hemminger.
We still had some cases using ->qdisc instead of ->qdisc_sleeping.
Also, qdisc_lookup() should return ingress qdiscs.
Signed-off-by: David S. Miller <davem@davemloft.net>
When an action is added several times with the same exact index
it gets deleted on every even-numbered attempt.
This fixes that issue.
Signed-off-by: Jamal Hadi Salim <hadi@cyberus.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
The indentation in part of tcp_minisocks makes it look like one of the if
statements is much more important than it actually is.
Signed-off-by: Adam Langley <agl@imperialviolet.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The Bluetooth qualification for PAN demands testing with BNEP header
compression disabled. This is actually pretty stupid and the Linux
implementation outsmarts the test system since it compresses whenever
possible. So to pass qualification two new parameters have been added
to control the compression of source and destination headers.
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Currently a mesh node will not forward a multicast frame if it is not subscribed
to the specific multicast address. This patch addresses the issue and fixes mesh
multicast forwarding.
Signed-off-by: Luis Carlos Cobo <luisca@cozybit.com>
Acked-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Now we deal with mesh forwarding before the 802.11->802.3 conversion, thus
eliminating a few unnecessary steps. The next hop lookup is called from
ieee80211_master_start_xmit() instead of subif_start_xmit(). Until the next hop
is found, RA in the frame will be all zeroes for frames originating from the
device. For forwarded frames, RA will contain the TA of the received frame,
which will be necessary to send a path error if a next hop is not found.
Signed-off-by: Luis Carlos Cobo <luisca@cozybit.com>
Acked-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
So far pktgen has had a restriction of only one device per kernel
thread. With the new multiqueue architecture this is no longer adequate.
This patch removes the restriction by adding a tag to the device name
in the pktgen configuration, a la eth0@0 etc. The tag is used for the
usual device config just as before. Also a new flag, QUEUE_MAP_CPU, is
introduced to mirror queue_map to the sending thread's smp_processor_id().
An example: We use 4 CPUs to send to one 10g interface (eth0)
and we use the new tagging to send a mix of packet sizes, 64, 576 and
1500 bytes. Also we use TX queues according to smp_processor_id()
PGDEV=/proc/net/pktgen/kpktgend_0
pgset "add_device eth0@0"
PGDEV=/proc/net/pktgen/kpktgend_1
pgset "add_device eth0@1"
PGDEV=/proc/net/pktgen/kpktgend_2
pgset "add_device eth0@2"
PGDEV=/proc/net/pktgen/kpktgend_3
pgset "add_device eth0@3"
....
PGDEV=/proc/net/pktgen/eth0@0
pgset "pkt_size 64"
pgset "flag QUEUE_MAP_CPU"
PGDEV=/proc/net/pktgen/eth0@1
pgset "pkt_size 572"
pgset "flag QUEUE_MAP_CPU"
PGDEV=/proc/net/pktgen/eth0@2
pgset "pkt_size 1496"
PGDEV=/proc/net/pktgen/eth0@3
pgset "pkt_size 1496"
pgset "flag QUEUE_MAP_CPU"
Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
If a packet_type specifies an active slave to bonding and not just any
interface, allow it to receive frames that came in on that interface.
Signed-off-by: Joe Eykholt <jre@nuovasystems.com>
Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Allow a packet_type that specifies the exact device to receive frames
even on an inactive bonding slave device. This is important for some
L2 protocols such as LLDP and FCoE. This can eventually be used
for the bonding special cases as well.
Signed-off-by: Joe Eykholt <jre@nuovasystems.com>
Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Otherwise subsequent changes need multiple return values.
Signed-off-by: Joe Eykholt <jre@nuovasystems.com>
Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Here's a revised version, based on Herbert's comments, of a fix for
the ipv4-inner, ipv6-outer interfamily ipsec beet mode. It fixes the
network header adjustment during interfamily, as well as makes sure
that we reserve enough room for the new ipv6 header if we might have
something else as the inner family. Also, the ipv4 pseudo header
construction was added.
Signed-off-by: Joakim Koskela <jookos@gmail.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Here's a revised version, based on Herbert's comments, of a fix for
the ipv6-inner, ipv4-outer interfamily ipsec beet mode. It fixes the
network header adjustment in interfamily, and doesn't reserve space
for the pseudo header anymore when we have ipv6 as the inner family.
Signed-off-by: Joakim Koskela <jookos@gmail.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Starting with 9043476f72 ("[PATCH]
sanitize proc_sysctl") we have two netfilter releated problems:
- WARNING: at kernel/sysctl.c:1966 unregister_sysctl_table+0xcc/0x103(),
caused by wrong order of ini/fini calls
- net.netfilter is duplicated and has truncated set of records
Thanks to very useful guidelines from Al Viro, this patch fixes both
of them.
Signed-off-by: Krzysztof Piotr Oledzki <ole@ans.pl>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch replaces dst_metric() with dst_mtu() in net/ipv6/route.c.
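The substitution is mechanical, roughly (illustrative, not a specific
hunk from the file):

  - mtu = dst_metric(dst, RTAX_MTU);
  + mtu = dst_mtu(dst);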
Signed-off-by: Rami Rosen <ramirose@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch replaces dst_metric() with dst_mtu() in net/ipv4/route.c.
Signed-off-by: Rami Rosen <ramirose@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since 49ffcf8f99 ("sysctl: update
sysctl_check_table"), setting struct ctl_table.procname = NULL no
longer works the way the AX.25 code expects it to, causing the AX.25
sysctl registration code to break if CONFIG_AX25_DAMA_SLAVE is not
set, as in some distribution kernels. Kernel releases from 2.6.24
onward are affected.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
dst_mac_count and src_mac_count patch from Eneas Hunguana.
We were sending one MAC address too many.
Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Random flow generation has not worked. This fixes it.
Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Stephen Hemminger <shemminger@vyatta.com>
Based upon original patch by Herbert Xu, which contained
the following problem description:
--------------------
When the forward delay is set to zero, we still delay the setting
of the forwarding state by one or possibly two timers depending
on whether STP is enabled. This could either turn out to be
instantaneous, or horribly slow depending on the load of the
machine.
As there is nothing preventing us from enabling forwarding straight
away, this patch eliminates this potential delay by executing the
code directly if the forward delay is zero.
The effect of this problem is that immediately after the carrier
comes on a port, the bridge will drop all packets received from
that port until it enters forwarding mode, thus causing unnecessary
packet loss.
Note that this patch doesn't fully remove the delay due to the
link watcher. We should also check the carrier state when we
are about to drop an incoming packet because the port is disabled.
But that's for another patch.
--------------------
This version of the fix takes a different approach, in that
it just does the state change directly.
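A sketch of the idea (simplified from the actual bridge STP code): when
a port should move towards forwarding, a zero forward delay lets us set
the state immediately instead of arming the forward-delay timer:

  if (br->forward_delay == 0) {
          p->state = BR_STATE_FORWARDING;   /* no LISTENING/LEARNING wait */
  } else {
          p->state = BR_STATE_LISTENING;
          mod_timer(&p->forward_delay_timer,
                    jiffies + br->forward_delay);
  }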
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes the following warning due to incompatible pointer
assignment:
net/bridge/br_netfilter.c: In function 'br_netfilter_rtable_init':
net/bridge/br_netfilter.c:116: warning: assignment from incompatible
pointer type
This warning is due to commit 4adf0af681
from July 30 (send correct MTU value in PMTU (revised)).
Signed-off-by: Rami Rosen <ramirose@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Patrick McHardy <kaber@trash.net> noticed that it would be nice to
handle NET_XMIT_BYPASS by NET_XMIT_SUCCESS with an internal qdisc flag
__NET_XMIT_BYPASS and to remove the mapping from dev_queue_xmit().
David Miller <davem@davemloft.net> spotted a serious bug in the first
version of this patch.
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Patrick McHardy <kaber@trash.net> noticed:
"The other problem that affects all qdiscs supporting actions is
TC_ACT_QUEUED/TC_ACT_STOLEN getting mapped to NET_XMIT_SUCCESS
even though the packet is not queued, corrupting upper qdiscs'
qlen counters."
and later explained:
"The reason why it translates it at all seems to be to not increase
the drops counter. Within a single qdisc this could be avoided by
other means easily, upper qdiscs would still increase the counter
when we return anything besides NET_XMIT_SUCCESS though.
This means we need a new NET_XMIT return value to indicate this to
the upper qdiscs. So I'd suggest to introduce NET_XMIT_STOLEN,
return that to upper qdiscs and translate it to NET_XMIT_SUCCESS
in dev_queue_xmit, similar to NET_XMIT_BYPASS."
David Miller <davem@davemloft.net> noticed:
"Maybe these NET_XMIT_* values being passed around should be a set of
bits. They could be composed of base meanings, combined with specific
attributes.
So you could say "NET_XMIT_DROP | __NET_XMIT_NO_DROP_COUNT"
The attributes get masked out by the top-level ->enqueue() caller,
such that the base meanings are the only thing that make their
way up into the stack. If it's only about communication within the
qdisc tree, let's simply code it that way."
This patch is trying to realize these ideas.
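The resulting scheme can be sketched as follows (simplified; the actual
flag names and values in sch_generic.h may differ): attribute bits are
ORed into the base NET_XMIT_* code inside the qdisc tree and stripped
before the value propagates up to dev_queue_xmit()'s callers:

  #define __NET_XMIT_STOLEN 0x00010000   /* queued/stolen by an action: no drop count */
  #define __NET_XMIT_BYPASS 0x00020000   /* internal replacement for NET_XMIT_BYPASS */

  /* upper qdiscs bump their drop counter only for real drops */
  #define net_xmit_drop_count(e) (((e) & __NET_XMIT_STOLEN) ? 0 : 1)

  /* top-level caller masks the attribute bits away */
  rc = q->enqueue(skb, q) & NET_XMIT_MASK;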
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds a few HW bug fixes.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
When joining an ad-hoc network, the user is currently required to specify
the channel. The network will not be joined otherwise, unless it happens
to be sitting on the currently active channel.
This patch implements automatic channel selection when the user has not
locked the interface onto a specific channel.
Signed-off-by: Daniel Drake <dsd@gentoo.org>
Acked-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
This patch makes it possible for a driver to specify a maximal listen
interval. The ability for the user to configure the listen interval is
not implemented yet; currently the maximum provided by the driver, or 1,
is used. mac80211 uses the config handler to pass the listen interval to
the driver.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
This patch adds the dtim_period in ieee80211_bss_conf, this allows the low
level driver to know the dtim_period, and to plan power save accordingly.
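In effect the interface change is (sketch):

  struct ieee80211_bss_conf {
          /* ... existing association parameters ... */
          u8 dtim_period;   /* DTIM period learnt from the AP's beacon */
  };

with drivers picking the value up through the bss_info_changed()
callback.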
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Acked-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Avoid the overhead of atomic increment/decrement on each received packet.
This helps performance of non-NAPI devices (like loopback).
Use a cleanup function to walk the queue on each CPU and clean out
any leftover packets.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The old code drops IPv6 packets if ipfragok is not set. Since
ipfragok is obsolete and has been replaced by skb->local_df, the
check must be changed to use skb->local_df instead.
This patch fixes the problem so that the packet is not dropped when
skb->local_df is set to true.
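The check in the IPv6 output path becomes roughly (sketch, not the
exact hunk):

  - if (!ipfragok && skb->len > mtu) {
  + if (!skb->local_df && skb->len > mtu) {
            /* send ICMPV6_PKT_TOOBIG and drop the packet */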
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ipfragok flag controls whether the packet may be fragmented,
either on the local host or beyond. The latter is only valid on
IPv4.
In fact, we never want to do the latter even on IPv4 when PMTU is
enabled. This is because even though we can't fragment packets
within SCTP due to the protocol's inherent faults, we can still
fragment them at the IP layer. By setting the DF bit we will improve
the PMTU process.
RFC 2960 only says that we SHOULD clear the DF bit in this case,
so we're compliant even if we set the DF bit. In fact RFC 4960
no longer has this statement.
Once we make this change, we only need to control the local
fragmentation. There is already a bit in the skb which controls
that, local_df. So this patch sets that instead of using the
ipfragok argument.
The only complication is that there isn't a struct sock object
per transport, so for IPv4 we have to resort to changing the
pmtudisc field for every packet. This should be safe though
as the protocol is single-threaded.
Note that after this patch we can remove ipfragok from the rest
of the stack too.
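In the transmit path this boils down to roughly (a sketch, not the
exact SCTP code):

  /* allow local fragmentation only when the packet asked for it */
  nskb->local_df = packet->ipfragok;

  /* IPv4: per-packet PMTU behaviour via the socket, e.g. */
  inet_sk(sk)->pmtudisc = want_pmtu ? IP_PMTUDISC_DO : IP_PMTUDISC_DONT;

where want_pmtu stands in for the transport's PMTU-discovery setting.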
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the Hop-by-Hop options header is set via setsockopt() with a
NULL data pointer and a non-zero optlen, the kernel successfully
returns 0 instead of returning an error such as EINVAL or EFAULT.
This patch fixes the problem.
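The missing validation amounts to something like (sketch):

  /* reject a NULL option buffer when a non-zero length was supplied */
  if (optlen > 0 && optval == NULL)
          return -EINVAL;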
Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
cookie_v6_check() did not call reqsk_free() if xfrm_lookup() fails,
leaking the request sock.
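The fix is the usual error-path cleanup, roughly (sketch, using the
xfrm_lookup() calling convention of this era):

  if (xfrm_lookup(&dst, &fl, sk, 0) < 0)
          goto out_free;              /* previously leaked req here */
  /* ... */
out_free:
  reqsk_free(req);
out:
  return NULL;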
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>