It doesn't seem to have any effect on the x86 architecture, but it does
have an effect on the Axis CRIS architecture.
Signed-off-by: Jesper Bengtsson <jesper.bengtsson@axis.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
When CONFIG_NET_CLS_ACT is enabled, tc_classify() is called twice in
prio_classify(). This causes "interesting" behaviour: with the setup
below, packets are duplicated, sent twice to ifb0, and then loop in and
out of ifb0.
The patch uses the previously calculated return value in the switch,
which is probably what Patrick had in mind in commit
bdba91ec70 -- maybe Patrick can
double-check this?
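A minimal sketch of the resulting shape (simplified; the case handling is
abbreviated from prio_classify()):
    err = tc_classify(skb, q->filter_list, &res);
    #ifdef CONFIG_NET_CLS_ACT
    switch (err) {                  /* reuse err; don't classify again */
    case TC_ACT_STOLEN:
    case TC_ACT_QUEUED:
            *qerr = NET_XMIT_SUCCESS;
            /* fall through */
    case TC_ACT_SHOT:
            return NULL;
    }
    #endif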
-- example setup --
ifconfig ifb0 up
tc qdisc add dev ifb0 root netem delay 2s
tc qdisc add dev $ETH root handle 1: prio
tc filter add dev $ETH parent 1: protocol ip prio 10 u32 \
match ip dst 172.24.110.6/32 flowid 1:1 \
action mirred egress redirect dev ifb0
ping -c1 172.24.110.6
Signed-off-by: Lucas Nussbaum <lucas.nussbaum@imag.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
acpi_get_devices() returns success even if it did not find any device.
We have to check for this case.
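A hedged sketch of the resulting pattern (the HID string and callback are
illustrative, not the actual patch):
    acpi_handle handle = NULL;
    acpi_status status;
    status = acpi_get_devices("PNP0C0D", check_device_cb, NULL, (void **)&handle);
    if (ACPI_FAILURE(status) || !handle)
            return -ENODEV;     /* AE_OK with no match still means "not found" */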
Signed-off-by: Alexey Starikovskiy <astarikovskiy@suse.de>
Tested-by: Daniel Ritz <daniel.ritz-ml@swissonline.ch>
Tested-by: Luca <kronos.it@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bridge code calls ethtool to get speed. The conversion to using
only ethtool_ops broke the case of devices without ethtool_ops.
This is a new regression in 2.6.23.
Rearranged the switch into a logical order, and used a gcc designated initializer.
Ps: speed should have been part of the network device structure from
the start rather than burying it in ethtool.
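Roughly what the guarded call looks like, as a hedged sketch (the cost values
are the classic STP path costs; the fallback is simplified):
    struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET };
    if (dev->ethtool_ops && dev->ethtool_ops->get_settings &&
        !dev->ethtool_ops->get_settings(dev, &ecmd)) {
            switch (ecmd.speed) {
            case SPEED_10000: return 2;
            case SPEED_1000:  return 4;
            case SPEED_100:   return 19;
            case SPEED_10:    return 100;
            }
    }
    return 100;                 /* no usable ethtool_ops: assume old 10Mbit */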
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Acked-by: Matthew Wilcox <matthew@wil.cx>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes some packet leakage in bridge. The bridging code was
allowing forward table entries to be generated even if a device was
being blocked. The fix is to not add forwarding database entries
unless the port is active.
The bug arose as part of the conversion to processing STP frames
through the normal receive path (in 2.6.17).
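A hedged sketch of the guard (simplified; the point is that learning happens
only on active ports):
    /* learn the source address only while the port is active */
    if (p->state == BR_STATE_LEARNING || p->state == BR_STATE_FORWARDING)
            br_fdb_update(br, p, eth_hdr(skb)->h_source);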
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Acked-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cell phone networks do link layer retransmissions and other
things that cause unnecessary timeout retransmits. So allow
the minimum RTO to be inflated per-route to deal with this.
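With a matching iproute2 this becomes a per-route metric, e.g. (addresses
illustrative):
    ip route replace 10.0.0.0/8 via 10.1.1.1 rto_min 200ms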
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexey Dobriyan reports that maxcpus=1 is still broken in 2.6.23-rc4:
if CONFIG_HOTPLUG_CPU is not set, x86_64 bootup oopses in show_stat() -
for_each_possible_cpu accesses a per-cpu area which was never set up.
Alexey identified commit 61ec7567db
(ACPI: boot correctly with "nosmp" or "maxcpus=0") as the origin;
but it's not really to blame, just exposes a bug in 2.6.23-rc1's commit
8b3b295502 (Especially when !CONFIG_HOTPLUG_CPU,
avoid needlessly allocating resources for CPUs that can never become available).
rc1's test for max_cpus < 2 in start_kernel() wasn't working because
max_cpus was still NR_CPUS at that point: until rc4 moved the maxcpus
parsing earlier. Now it sets cpu_possible_map to 1 before allocating
all possible per-cpu areas; then smp_init() expands cpu_possible_map
to cpu_present_map (0xf in my case) later on.
rc1's commit has good intentions, but expects cpu_present_map to be
limited by maxcpus, which is only the case on i386. cpus_and(possible,
possible,present) might be good, but needs an audit of cpu_present_map
uses - there may well be assumptions that any cpu present is possible.
So stay safe for now and just revert those #ifndef CONFIG_HOTPLUG_CPU
optimizations in rc1's commit.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Alexey Dobriyan <adobriyan@sw.ru>
Cc: Len Brown <lenb@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Clear parent death signal on SID transitions to prevent unauthorized
signaling between SIDs.
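A hedged sketch of the idea (hook placement simplified; sid/osid stand for the
new and old security IDs):
    /* on a SID transition, drop any inherited parent-death signal */
    if (tsec->sid != tsec->osid)
            current->pdeath_signal = 0;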
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Acked-by: Eric Paris <eparis@parisplace.org>
Signed-off-by: James Morris <jmorris@localhost.localdomain>
If an INIT arrives with an invalid parameter length that looks like this:
Parameter Type : 1
Parameter Length: 800
and contains no payload, SCTP will ignore this parameter and send
back an INIT-ACK.
This patch handles this invalid parameter length correctly.
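A hedged sketch of the added sanity check (variable and helper names are
illustrative):
    /* a parameter must cover at least its own header and must
       fit inside what remains of the chunk */
    if (ntohs(param->length) < sizeof(struct sctp_paramhdr) ||
        ntohs(param->length) > bytes_remaining)
            return report_invalid_param(param);     /* illustrative */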
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Currently we abort on the INIT chunk when our backlog is currently
exceeded. Delay this abort until COOKIE-ECHO to give the user
time to accept the socket. Also, make sure that we treat
a sk_max_backlog of 0 as no connections allowed.
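A hedged sketch of the two checks (helper names approximate the socket API of
that era):
    /* a zero backlog means no connections are allowed at all */
    if (!sk->sk_max_ack_backlog || sk_acceptq_is_full(sk))
            goto refuse;            /* now checked at COOKIE-ECHO, not INIT */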
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
When performing a retransmit, do not include the chunk if
it was sent less than 1 RTT ago. The reason is that we
may receive the SACK very soon and wouldn't need to retransmit.
Suggested by Randy Stewart.
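A hedged sketch of the filter (sent_at and rtt are illustrative field names):
    /* sent less than one RTT ago: its SACK may simply not have
       arrived yet, so skip it this round */
    if (time_before(jiffies, chunk->sent_at + transport->rtt))
            continue;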
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Do not set Unconfirmed transports to Inactive state. This may
result in an inactive association being destroyed since we start
counting errors on "inactive" transports against the association.
This was found at the SCTP interop event.
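A hedged sketch of the guard (transport state names from the SCTP code of
that era):
    /* never demote an unconfirmed transport to inactive; its errors
       would then count against the association */
    if (transport->state != SCTP_UNCONFIRMED)
            transport->state = SCTP_INACTIVE;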
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
sctp_bindx() allows the use of an unspecified port. The problem is
that every address we bind to ends up selecting a new port if
the user specified port 0. This patch allows re-use of the
already selected port when the port from bindx was 0.
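A hedged sketch of the reuse (bp stands for the endpoint's bind address list;
names approximate):
    /* the first bind with port 0 picks an ephemeral port; later
       addresses in the same bindx call reuse it */
    if (!snum)
            snum = bp->port;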
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
When a packet bundling many SHUTDOWN-ACK chunks is received in ESTAB state,
the "sctp protocol violation state" message is printed many times.
If the SHUTDOWN-ACK is bundled 300 times in one packet, the message will be
printed 300 times. The same problem also exists when an unexpected
HEARTBEAT-ACK message is received bundled many times.
This patch uses net_ratelimit() to keep the error messages from being
printed too fast.
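The usual pattern, as a hedged sketch (format string and arguments
illustrative):
    if (net_ratelimit())
            printk(KERN_ERR "sctp protocol violation state %d chunkid %d\n",
                   asoc->state, ch->type);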
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
The PROTOCOL VIOLATION error cause in ABORT is encoded badly when making
the abort chunk. When SCTP encodes an ABORT chunk with a PROTOCOL VIOLATION
error cause, it just adds the error message to the PROTOCOL VIOLATION error
cause; the remaining four bytes (struct sctp_paramhdr) are appended to the
chunk without updating the length of the error cause. This leaves the ABORT
chunk in a bad format. The chunk looks like this:
ABORT chunk
  Chunk type: ABORT (6)
  Chunk flags: 0x00
  Chunk length: 72 (*1)
  Protocol violation cause
    Cause code: Protocol violation (0x000d)
    Cause length: 62 (*2)
    Cause information: 5468652063756D756C61746976652074736E2061636B2062...
    Cause padding: 0000
  [Needless] 00030010
Chunk length (*1) is 72, but Cause length (*2) is only 62 and does not
include the extra 4 bytes:
(72 - sizeof(chunk_hdr)) = 68, while ((62 + 3) / 4) * 4 = 64, so the
lengths do not match.
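A hedged sketch of the fix (names approximate): when the extra struct
sctp_paramhdr is appended, the cause length must grow with it:
    err_hdr->length = htons(ntohs(err_hdr->length) +
                            sizeof(struct sctp_paramhdr));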
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
[POWERPC] PS3: Fix bug where the major version part is not compared
[POWERPC] Update defconfigs
[POWERPC] spufs: Don't call spu_run_init from spu_reacquire_runnable
[POWERPC] spufs: Fix update of mailbox status register during backed wbox write
[POWERPC] spu_manage: fix spu_unit_number for celleb device tree
[POWERPC] Update defconfigs
[POWERPC] Flush registers to proper task context
Another fallout from the removal of #include <linux/fs.h> from mm.h
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If the stack pointer is 0xc057a000, then the first stack page is at
0xc0579000 (the stack pointer is decremented before use). Not
calculating this correctly caused guests with CONFIG_DEBUG_PAGEALLOC=y
to be killed with a "bad stack page" message: the initial kernel stack
was just preceding the .smp_locks section, which
CONFIG_DEBUG_PAGEALLOC marks read-only when freeing.
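A hedged sketch of the corrected calculation (constants from the example
above):
    /* the SP is decremented before use, so back up one byte before
       rounding down to find the first page the stack touches */
    first_stack_page = (esp - 1) & PAGE_MASK;   /* 0xc057a000 -> 0xc0579000 */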
Thanks to Frederik Deweerdt for the bug report!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In sctp_addto_chunk(), padding is added before the payload if the
chunk length is not 4-byte aligned. But the padding is computed with a
bad length. This patch fixes this problem.
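One plausible form of the fix, as a hedged sketch (WORD_ROUND rounds up to a
4-byte boundary in the SCTP code of that era):
    chunklen = ntohs(chunk->chunk_hdr->length);
    padlen = WORD_ROUND(chunklen) - chunklen;   /* 0..3 bytes, not chunklen % 4 */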
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Fix the bug that the major version part of the firmware version number
is ignored in the comparison done by ps3_compare_firmware_version
because the difference of two 64-bit quantities is returned as an int.
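A hedged sketch of the fix (version_a/version_b stand for the packed 64-bit
version words): compare the values instead of narrowing their difference to
an int:
    if (version_a > version_b)
            return 1;
    if (version_a < version_b)
            return -1;
    return 0;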
Signed-off-by: Masakazu Mokuno <mokuno@sm.sony.co.jp>
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
During GPIO testing on the PiMX1 board, a problem with the
input function of some pins was revealed. The GIUS bit has
to be set for inputs to work reliably too. It is surprising
that input worked on some pins with the incorrect setup before.
DR is not mandatory, but it ensures a stable, constant level
on internal traces.
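A hedged sketch in terms of the i.MX pin-setup helper of that era (the pin
and flag names are shown for illustration only):
    /* route the pin through the GPIO unit: GIUS must be set even for
       inputs, and DR keeps internal traces at a stable level */
    imx_gpio_mode(GPIO_PORTA | 3 | GPIO_IN | GPIO_GIUS | GPIO_DR);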
Signed-off-by: Pavel Pisa <pisa@cmp.felk.cvut.cz>
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This fixes a major bug which was happening when a SPU thread advances
its execution right after being restored to a SPU. A potentially
outdated NPC value was being (re)written to the SPU.
So, spu_run_init, in this case, was either not doing anything relevant,
or breaking the execution of the SPU thread.
This fixes a common problem of losing a mailbox write when it was done
to a saved context.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When a process writes into the inbound spu mailbox (wbox) while the
context is saved, we accidentally break the contents of the mb_stat_R
register by clearing other entries of the mailbox status register. This
can cause the user side to hang.
This change fixes the problem by only altering the appropriate bits
of the mailbox status register during a backing-store write.
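A hedged sketch of the idea (mask and shift are illustrative):
    /* rewrite only this mailbox's bits in mb_stat_R, leaving the
       status of the other mailboxes untouched */
    ctx->csa.prob.mb_stat_R &= ~(0xff << 8);
    ctx->csa.prob.mb_stat_R |= (new_count & 0xff) << 8;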
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The dummy touchkit_ps2_detect() for the CONFIG_MOUSE_PS2_TOUCHKIT=n case
shouldn't be a global function.
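A hedged sketch of the change (the signature follows the psmouse detect
callbacks of that era):
    /* static: the dummy stub must not leak into the global namespace */
    static int touchkit_ps2_detect(struct psmouse *psmouse, int set_properties)
    {
            return -ENOSYS;
    }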
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
We should not return IRQ_HANDLED if we didn't handle the interrupt.
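The standard pattern, as a hedged sketch (the device check is illustrative):
    static irqreturn_t my_interrupt(int irq, void *dev_id)
    {
            if (!device_raised_this_irq(dev_id))    /* illustrative */
                    return IRQ_NONE;                /* not ours: say so */
            /* ... actual handling ... */
            return IRQ_HANDLED;
    }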
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
This fixes a regression introduced with 2.6.23-rc4 after some
confusion about the device tree interfaces.
IBM QS21 device trees provide "physical-id", so we changed the code to
run on that and remain compatible with all IBM machines.
However, the Toshiba Celleb device tree provides the "unit-id" property,
which was in the Linux code, but never used in this way on IBM hardware.
Legacy device trees used the reg property for the physical id of an SPE.
This patch fixes find_spu_unit_number to look for the spu id in that order.
The length is checked to avoid misinterpretation in case the attributes
unit-id or reg do not contain the id.
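A hedged sketch of the lookup order (get_property was the device-tree
accessor of that era; the flow is simplified):
    const u32 *id;
    int len;
    id = get_property(node, "physical-id", &len);       /* IBM QS21 */
    if (!id || len != 4)
            id = get_property(node, "unit-id", &len);   /* Toshiba Celleb */
    if (!id || len != 4)
            id = get_property(node, "reg", &len);       /* legacy trees */
    *unit_number = (id && len == 4) ? *id : -1;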
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Jeremy Kerr <jk@ozlabs.org>
Currently we only assign the sequence number to a packet that
we are about to transmit. This however breaks the Partial
Reliability extensions, because it's possible for us to
never transmit a packet, i.e. it expires before we get to send
it. In such cases, if the message contained multiple SCTP
fragments, and we did manage to send the first part of the
message, the Stream sequence numbers would get into an invalid
state and cause the receiver to stall.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
When we receive a FWD-TSN (meaning the peer has abandoned the data),
we need to clean up any partially received messages that may be
hanging out on the re-assembly or re-ordering queues. This is
a MUST requirement that was not properly done before.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
When we flush register state for FP, Altivec, or SPE in flush_*_to_thread
we need to respect the task_struct that the caller has passed to us.
In most cases we are called with current; however, sometimes (ptrace) we may
be passed a different task_struct.
This showed up when using gdbserver debugging a simple program that used
floating point. When gdb tried to show the FP regs they all showed up as
0, because the child's FP registers were never properly flushed to memory.
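A hedged sketch of the shape of the fix (simplified; the FP case is shown,
Altivec and SPE are analogous):
    void flush_fp_to_thread(struct task_struct *tsk)
    {
            if (tsk->thread.regs && (tsk->thread.regs->msr & MSR_FP)) {
                    preempt_disable();
                    giveup_fpu(tsk);    /* flush *tsk*, not blindly current */
                    preempt_enable();
            }
    }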
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Pending interrupts can remain at boot time on some
platforms. This will cause spurious interrupts when the interrupt is
enabled for the first time. This patch clears the IVR at CPU
initialization to eliminate such spurious interrupts.
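A hedged sketch of the drain loop (accessor names approximate the ia64 API):
    /* acknowledge anything already latched until the IVR reads back
       as the spurious vector */
    while (ia64_getreg(_IA64_REG_CR_IVR) != IA64_SPURIOUS_INT_VECTOR)
            ia64_eoi();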
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Fix handling for spurious interrupts not being mapped to any IRQs.
Currently, spurious interrupts that are not mapped to any IRQs are
handled as IRQ 15 (== IA64_SPURIOUS_VECTOR). But this is not proper,
because vector != irq. We need special handling for such spurious
interrupts that are not mapped to any IRQs.
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Initially pkt_dev can be NULL; this causes netif_subqueue_stopped() to
oops. The patch below should cure it, but maybe the pktgen TX logic
should be reworked to better support the new multiqueue support.
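A hedged sketch of the guard (the queue argument form is approximated):
    /* pkt_dev may not be set up yet on this pass: don't dereference it */
    if (pkt_dev && netif_subqueue_stopped(odev, pkt_dev->queue_map))
            return;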
Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add comment to explain why we cannot read back after chip reset
before delaying.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As pointed out by Jürgen, we are overflowing the number of GPIOs
in pxa_init_irq_gpio(). I'm seeing the same problem on my HTC
Universal PXA270 based PDA.
According to Eric, the function argument is the number of GPIOs,
so we should keep the semantics and reduce the number of
iterations by 1.
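A hedged sketch of the bound (indices shown directly; the real loop walks IRQ
numbers):
    /* gpio_nr is a count, so the valid GPIO numbers are 0 .. gpio_nr - 1 */
    for (gpio = 2; gpio < gpio_nr; gpio++)
            setup_gpio_irq(gpio);   /* illustrative */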
Signed-off-by: Samuel Ortiz <sameo@openedhand.com>
Acked-by: Jürgen Schindele <linux@schindele.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
bnx2.c (incorrectly) sets current->state directly to
TASK_UNINTERRUPTIBLE, without going through set_task_state(). However,
all the code wants to do is an msleep, so just make it do that instead...
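The shape of the change, as a hedged sketch (the delay value is
illustrative):
    /* before: pokes task state by hand, easy to get wrong */
    current->state = TASK_UNINTERRUPTIBLE;
    schedule_timeout(HZ / 50);
    /* after: msleep() handles the task state itself */
    msleep(20);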
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
cleanup: we have the 'se' and 'curr' entity-pointers already,
no need to use p->se and current->se.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mike Galbraith <efault@gmx.de>
small schedstat fix: the cfs_rq->wait_runtime 'sum of all runtimes'
statistics counters missed newly forked tasks and thus had a constant
negative skew. Fix this.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Peter Zijlstra noticed the following bug in SCHED_FEAT_SKIP_INITIAL (which
is disabled by default at the moment): it relies on se.wait_start_fair
being 0 while update_stats_wait_end() did not recognize a 0 value,
so instead of 'skipping' the initial interval we gave the new child
a maximum boost of +runtime-limit ...
(No impact on the default kernel, but nice to fix for completeness.)
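A hedged sketch of the missing recognition (field name taken from the text
above):
    /* 0 is the "skip initial interval" marker, not a real timestamp;
       don't measure wait time against it */
    if (!se->wait_start_fair)
            return;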
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mike Galbraith <efault@gmx.de>
update the fair-clock before using it for the key value.
[ mingo@elte.hu: small cleanups. ]
Signed-off-by: Ting Yang <tingy@cs.umass.edu>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
de-HZ-ification of the granularity defaults unearthed a pre-existing
property of CFS: while it correctly converges to the granularity goal,
it does not prevent run-time fluctuations in the range of
[-gran ... 0 ... +gran].
With the increase of the granularity due to the removal of HZ
dependencies, this becomes visible in chew-max output (with 5 tasks
running):
out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40
out: 27 . 27. 32 | flu: 0 . 0 | ran: 17 . 13 | per: 44 . 40
out: 27 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 36 . 40
out: 29 . 27. 32 | flu: 2 . 0 | ran: 17 . 13 | per: 46 . 40
out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40
out: 29 . 27. 32 | flu: 0 . 0 | ran: 18 . 13 | per: 47 . 40
out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40
average slice is the ideal 13 msecs and the period is picture-perfect 40
msecs. But the 'ran' field fluctuates around 13.33 msecs and there's no
mechanism in CFS to keep that from happening: it's a perfectly valid
solution that CFS finds.
to fix this we add a granularity/preemption rule that knows about
the "target latency", which makes tasks that run longer than the ideal
latency run a bit less. The simplest approach is to simply decrease the
preemption granularity when a task overruns its ideal latency. For this
we have to track how much the task executed since its last preemption.
( this adds a new field to task_struct, but we can eliminate that
overhead in 2.6.24 by putting all the scheduler timestamps into an
anonymous union. )
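A hedged sketch of the rule (names approximate the CFS code of that era,
using the new per-task timestamp mentioned above):
    /* how long has the current task run since it was last preempted? */
    delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
    /* if it overran the ideal latency, drop its preemption granularity
       so the next runnable task preempts it sooner */
    if (delta_exec > ideal_runtime)
            granularity = 0;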
with this change in place, chew-max output is fluctuation-less all
around:
out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 1 | ran: 13 . 13 | per: 41 . 40
out: 28 . 27. 39 | flu: 0 . 1 | ran: 13 . 13 | per: 41 . 40
this patch has no impact on any fastpath or on any globally observable
scheduling property. (unless you have sharp enough eyes to see
millisecond-level ruckles in glxgears smoothness :-)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mike Galbraith <efault@gmx.de>
There is an Amarok song switch time increase (regression) under
hefty load.
What is happening is that sleeper_bonus is never consumed, and only
rarely goes below runtime_limit, so for the most part, Amarok isn't
getting any bonus at all. We're keeping sleeper_bonus right at
runtime_limit (sched_latency == sched_runtime_limit == 40ms) forever, ie
we don't consume if we're lower than that, and don't add if we're above
it. One Amarok thread waking (or anybody else) will push us past the
threshold, so the next thread waking gets nada, but will reap pain from
the previous thread waking until we drop back to runtime_limit. It
looks to me like under load, some random task gets a bonus, and
everybody else pays, whether deserving or not.
This diff fixed the regression for me at any load rate.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>