It's been a useless no-op for long enough in 2.6, so I figured it's time to
remove it. The number of people that could object because they're
maintaining unified 2.4 and 2.6 drivers is probably rather small.
[ Handled drivers added by netdev tree and some missed IRDA cases... -DaveM ]
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Several devices have multiple independent RX queues per net
device, and some have a single interrupt doorbell for several
queues.
In either case, it's easier to support layouts like that if the
structure representing the poll is independent of the net
device itself.
The signature of the ->poll() callback goes from:
int foo_poll(struct net_device *dev, int *budget)
to
int foo_poll(struct napi_struct *napi, int budget)
The callback returns the number of RX packets processed (or
the number of "NAPI credits" consumed, if you want to get
abstract). The callee no longer needs to bump
dev->quota, *budget, etc., because all of that is handled by the
caller upon return.
The napi_struct is to be embedded in the device driver private data
structures.
Furthermore, it is the driver's responsibility to disable all NAPI
instances in its ->stop() device close handler. Since the
napi_struct is privatized into the driver's private data structures,
only the driver knows how to get at all of the napi_struct instances
it may have per-device.
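As a rough illustration of the new model, here is a hypothetical "foo" driver;
the foo_* helpers are made up, while netif_napi_add(), netif_rx_complete() and
napi_disable() are the helpers this tree provides:

struct foo_adapter {
	struct napi_struct napi;	/* embedded, one instance per RX queue */
	struct net_device *netdev;
	/* rings, locks, stats, ... */
};

static int foo_poll(struct napi_struct *napi, int budget)
{
	struct foo_adapter *adapter = container_of(napi, struct foo_adapter, napi);
	int work_done;

	/* clean at most 'budget' RX packets from this queue's ring */
	work_done = foo_clean_rx_ring(adapter, budget);

	if (work_done < budget) {
		/* done for now: complete the poll and re-enable the RX interrupt */
		netif_rx_complete(adapter->netdev, napi);
		foo_irq_enable(adapter);
	}

	return work_done;	/* the NAPI credits consumed */
}

/* probe: register the instance; ->stop() must later napi_disable() it */
	netif_napi_add(netdev, &adapter->napi, foo_poll, 64);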
With lots of help and suggestions from Rusty Russell, Roland Dreier,
Michael Chan, Jeff Garzik, and Jamal Hadi Salim.
Bug fixes from Thomas Graf, Roland Dreier, Peter Zijlstra,
Joseph Fannin, Scott Wood, Hans J. Koch, and Michael Chan.
[ Ported to current tree and all drivers converted. Integrated
Stephen's follow-on kerneldoc additions, and restored poll_list
handling to the old style to fix mutual exclusion issues. -DaveM ]
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This blade-specific board form factor is identical to the 82571EB
board.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
This patch adds support for 2 new board variants:
- A Quad port fiber 82571 board
- A blade version of the 82571 quad copper board
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
All drivers implement ethtool get_perm_addr the same way -- by calling
the generic function. So we can inline the generic function into the
caller and avoid going through the drivers.
Signed-off-by: Matthew Wilcox <matthew@wil.cx>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of all drivers reading pci config space to get the revision
ID, they can now use the pci_device->revision member.
This exposes some issues where drivers were reading a word or a dword
for the revision number, and adding useless error-handling around the
read. Some drivers even just read it for no purpose at all.
In devices where the revision ID is being copied over and used in what
appears to be a hot path, I have left the copy code
and the cached copy so as not to affect the driver's performance.
Compile tested with make all{yes,mod}config on x86_64 and i386.
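A minimal before/after sketch (pdev being the usual struct pci_dev pointer):

	u8 rev;

	/* before: an explicit config space read, sometimes wrapped in
	 * needless error handling or done as a word/dword read */
	pci_read_config_byte(pdev, PCI_REVISION_ID, &rev);

	/* after: the PCI core has already cached the value at enumeration time */
	rev = pdev->revision;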
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Acked-by: Dave Jones <davej@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
To ensure the symmetry of poll enable/disable in up/down, we should
initialize the netdevice to be poll_disabled at load time. Doing
this after register_netdevice leaves us open to another race, so
let's move all the netif_* calls above register_netdevice so that the
stack starts out in the state we expect.
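A sketch of the intended probe-time ordering, with illustrative e1000-style
names and error handling elided:

	/* put the stack-visible state in order before the device becomes visible */
	netif_carrier_off(netdev);
	netif_stop_queue(netdev);
	netif_poll_disable(netdev);	/* start out poll-disabled, matching ->stop() */

	err = register_netdev(netdev);
	if (err)
		goto err_register;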
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Doug Chapman <doug.chapman@hp.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
This restores the previously removed netif_poll_enable call in e1000_open.
It's needed on all but the first call to e1000_open for a NIC as
e1000_close always calls netif_poll_disable.
netif_poll_enable can only be called safely if no polls have been
scheduled. This should be the case as long as we don't enter our IRQ
handler.
In order to guarantee this we explicitly disable IRQs as early as possible
when we're probing the NIC.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Kok, Auke" <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Herbert Xu wrote:
"netif_poll_enable can only be called if you've previously called
netif_poll_disable. Otherwise a poll might already be in action
and you may get a crash like this."
Removing the call to netif_poll_enable in e1000_open should fix this issue,
the only other call to netif_poll_enable is in e1000_up() which is only
reached after a device reset or resume.
Bugzilla: http://bugzilla.kernel.org/show_bug.cgi?id=8455
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=240339
Tested by Doug Chapman <doug.chapman@hp.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
pci_enable_msi failure is a normal event, so we should not print any error.
Going over the code I spotted a leak: a missing pci_disable_msi() when irq
allocation fails. The whole code also needed a cleanup, so I combined the
two different calls to pci_request_irq into a single call, making this
look a lot better. All #ifdef CONFIG_PCI_MSI's have been removed.
Compile tested with both CONFIG_PCI_MSI enabled and disabled.
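Roughly, the combined request path now looks like this (field names are
illustrative, not necessarily the driver's exact ones):

	int err;

	/* pci_enable_msi() failure is normal: silently fall back to legacy INTx */
	adapter->have_msi = !pci_enable_msi(pdev);

	err = request_irq(pdev->irq, e1000_intr,
			  adapter->have_msi ? 0 : IRQF_SHARED,
			  netdev->name, netdev);
	if (err && adapter->have_msi)
		pci_disable_msi(pdev);	/* the release that used to be missing */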
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
flush_work(wq, work) doesn't need the first parameter, we can use cwq->wq
(this was possible from the very beginning, I missed this). So we can unify
flush_work_keventd and flush_work.
Also, rename flush_work() to cancel_work_sync() and fix all callers.
Perhaps this is not the best name, but "flush_work" is really bad.
(akpm: this is why the earlier patches bypassed maintainers)
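For callers the rename is mechanical; a sketch of the new spelling (the work
item name is illustrative):

	/* was: flush_work(wq, &adapter->reset_task); */
	cancel_work_sync(&adapter->reset_task);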
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Switch e1000 over to flush_work_keventd(). This probably fixes a netdev-close
versus linkwatch rtnl_lock() deadlock which nobody knew about.
(akpm: bypassed maintainers, sorry. There are other patches which depend on
this)
Cc: "Maciej W. Rozycki" <macro@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Garzik <jeff@garzik.org>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A patch to use ARRAY_SIZE macro already defined in kernel.h.
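For example (array name illustrative):

	/* before */
	n = sizeof(stats_strings) / sizeof(stats_strings[0]);

	/* after */
	n = ARRAY_SIZE(stats_strings);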
Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Use the round_jiffies() function in e1000.
These timers all were of the "about once a second" or "about once every X
seconds" variety and several showed up in the "what wakes the cpu up" profiles
that the tickless patches provide. Some timers are highly dynamic based on
network load; but even on low activity systems they still show up so the
rounding is done only in cases of low activity, allowing higher frequency
timers in the high activity case.
The various hardware watchdogs are an obvious case; they run every 2 seconds
but aren't otherwise specific about exactly when they need to run.
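For a timer of the "about every 2 seconds" kind, the change is simply (timer
name illustrative):

	mod_timer(&adapter->watchdog_timer,
		  round_jiffies(jiffies + 2 * HZ));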
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
* 'e1000-fixes' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6:
e1000: FIX: Stop raw interrupts disabled nag from RT
e1000: FIX: firmware handover bits
e1000: FIX: be ready for incoming irq at pci_request_irq
Current e1000_xmit_frame spews raw interrupt disabled nag messages when
used with RT kernel patches. This patch uses spin_trylock_irqsave,
which allows RT patches to properly manage the irq semantics.
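A hedged sketch of the pattern in the transmit path (lock name illustrative):

	unsigned long flags;

	if (!spin_trylock_irqsave(&adapter->tx_lock, flags)) {
		/* ask the stack to requeue instead of spinning with irqs off */
		return NETDEV_TX_LOCKED;
	}

	/* ... queue the frame ... */

	spin_unlock_irqrestore(&adapter->tx_lock, flags);
	return NETDEV_TX_OK;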
Signed-off-by: Mark Huth <mhuth@mvista.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Upon code inspection it was spotted that the firmware handover bits were
mismatched between get and set, which may have resulted in management issues
on PCI-E adapters. Setting them correctly may fix some management issues
such as ARP routing etc.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
DEBUG_SHIRQ code exposed that e1000 was not ready for incoming interrupts
after having called pci_request_irq. This obviously requires us to finish
our software setup which assigns the irq handler before we request the
irq.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
When csum_offset was introduced we did a conversion from csum to
csum_offset where applicable. A couple of drivers were missed in
this process.
It was harmless to begin with since the two fields coincided. Now
that we've made them different with the addition of csum_start, the
missed drivers must be converted, or they can't send out any packets
that require checksum offload.
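For a driver's TX checksum-offload setup, the conversion amounts to the
following sketch (exact field names vary per driver):

	unsigned int css = skb_transport_offset(skb);	/* start of the L4 header */

	/* checksum insertion offset handed to the hardware:
	 *   before: css + skb->csum        (only worked while the fields coincided)
	 *   now:    css + skb->csum_offset
	 */
	unsigned int cso = css + skb->csum_offset;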
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
To clearly state the intent of copying to linear sk_buffs, _offset being an
overly long variant but interesting for the sake of saving some bytes.
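For example, the new helpers read as (a sketch):

	/* copy 'len' bytes into the skb's linear data area */
	skb_copy_to_linear_data(skb, from, len);

	/* the same, but starting 'offset' bytes into the linear area */
	skb_copy_to_linear_data_offset(skb, offset, from, len);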
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
So that it is also an offset from skb->head. This reduces its size from 8 to
4 bytes on 64-bit architectures, allowing us to fill the 4-byte hole left by
the layer headers conversion, reducing struct sk_buff size to 256 bytes, i.e.
4 64-byte cachelines, and since the sk_buff slab cache is SLAB_HWCACHE_ALIGN...
:-)
Many calculations that previously required that skb->{transport,network,
mac}_header be first converted to a pointer now can be done directly, being
meaningful as offsets or pointers.
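A hedged illustration of the accessors this enables, usable either as pointers
or as offsets:

	/* as pointers into the packet (resolved against skb->head): */
	struct tcphdr *th = tcp_hdr(skb);
	unsigned char *l3 = skb_network_header(skb);

	/* as offsets from skb->data, e.g. for descriptor setup: */
	unsigned int l4_off = skb_transport_offset(skb);
	unsigned int l3_off = skb_network_offset(skb);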
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ip_hdrlen() buddy, created to reduce the number of skb->h.th-> uses and to
avoid the longer, open coded equivalent.
Ditched a no-op in bnx2 in the process.
I wonder if we should have a BUG_ON(skb->h.th->doff < 5) in tcp_optlen()...
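For reference, the helpers boil down to the following (a sketch, plus a
typical driver use):

	/* tcp_hdrlen(skb): tcp_hdr(skb)->doff * 4, the whole TCP header        */
	/* tcp_optlen(skb): (tcp_hdr(skb)->doff - 5) * 4, just the option bytes */
	unsigned int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);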
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For the quite common 'skb->h.raw - skb->data' sequence.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now the skb->nh union has just one member, .raw, i.e. it is just like the
skb->mac union, strange, no? I'm just leaving it like that until the transport
layer conversion is done, when we'll rename skb->mac.raw to skb->mac_header
(or ->mac_header_offset?), ditto for ->{h,nh}.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For the quite common 'skb->nh.raw - skb->data' sequence.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit 60cba200f1. It's been
linked to lockups of the e1000 hardware, see for example
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=229603
but it's likely that the commit itself is not really introducing the
bug, but just allowing an unrelated problem to rear its ugly head (i.e.
one current working theory is that the code exposes us to a hardware
race condition by decreasing the amount of time we spend in each NAPI
poll cycle).
We'll revert it until root cause is known. Intel has a repeatable
reproduction on two different machines and bus traces of the hardware
doing something bad.
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg KH <gregkh@suse.de>
Cc: Dave Jones <davej@redhat.com>
Cc: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch splits the vlan_group struct into a multi-allocated struct. On
x86_64, the size of the original struct is a little more than 32KB, causing
a 4-order allocation, which is prone to problems caused by buddy-system
external fragmentation conditions.
I couldn't just use vmalloc() because vfree() cannot be called in the
softirq context of the RCU callback.
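The shape of the change, as a hedged sketch (constants and field names are
illustrative and may differ from the final patch):

#define VLAN_GROUP_ARRAY_LEN		4096
#define VLAN_GROUP_ARRAY_SPLIT_PARTS	8
#define VLAN_GROUP_ARRAY_PART_LEN	(VLAN_GROUP_ARRAY_LEN / \
					 VLAN_GROUP_ARRAY_SPLIT_PARTS)

struct vlan_group {
	int real_dev_ifindex;
	/* a small top-level table of pointers; each part is allocated on
	 * demand, keeping every allocation well below one page */
	struct net_device **vlan_devices_arrays[VLAN_GROUP_ARRAY_SPLIT_PARTS];
	struct rcu_head rcu;
};

static inline struct net_device *vlan_group_get_device(struct vlan_group *vg,
							int vlan_id)
{
	struct net_device **array;

	array = vg->vlan_devices_arrays[vlan_id / VLAN_GROUP_ARRAY_PART_LEN];
	return array ? array[vlan_id % VLAN_GROUP_ARRAY_PART_LEN] : NULL;
}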
Signed-off-by: Dan Aloni <da-x@monatomic.org>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit d2ed16356f.
As Thomas Gleixner reports:
"e1000 is not working anymore. ifup fails permanentely.
ADDRCONF(NETDEV_UP): eth0: link is not ready
nothing else"
The broken commit was identified with "git bisect".
Auke Kok says:
"I think we need to drop this now. The report that says that this
*fixes* something might have been on regular interrupts only. I
currently suspect that it breaks all MSI interrupts, which would make
sense if I look at the code. Very bad indeed."
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial: (25 commits)
Documentation/kernel-docs.txt update.
arch/cris: typo in KERN_INFO
Storage class should be before const qualifier
kernel/printk.c: comment fix
update I/O sched Kconfig help texts - CFQ is now default, not AS.
Remove duplicate listing of Cris arch from README
kbuild: more doc. cleanups
doc: make doc. for maxcpus= more visible
drivers/net/eexpress.c: remove duplicate comment
add a help text for BLK_DEV_GENERIC
correct a dead URL in the IP_MULTICAST help text
fix the BAYCOM_SER_HDX help text
fix SCSI_SCAN_ASYNC help text
trivial documentation patch for platform.txt
Fix typos concerning hierarchy
Fix comment typo "spin_lock_irqrestore".
Fix misspellings of "agressive".
drivers/scsi/a100u2w.c: trivial typo patch
Correct trivial typo in log2.h.
Remove useless FIND_FIRST_BIT() macro from cardbus.c.
...
By reading the MAC status register we can detect whether the MAC has
seen the PHY report link. This allows us to show the link properties
in ethtool before the device is up.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Now that 2.6.19 provides a proper implementation that saves MSI and PCI-E
config space, we can have the e1000 driver use those instead of its
custom implementation.
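In practice this means leaning on the generic calls in the suspend/resume
paths (a sketch):

	/* e1000_suspend(): the core save now covers MSI and PCI Express state */
	pci_save_state(pdev);

	/* e1000_resume() */
	pci_restore_state(pdev);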
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Correct mis-spellings of "algorithm", "appear", "consistent" and
(shame, shame) "kernel".
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Remove the NETIF_F_TSO #ifdef-ery in drivers/net; this was
for old-old-2.4 compat (even current 2.4 has NETIF_F_TSO),
and it's time to get rid of it by now.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The driver was still mis-calculating the number of bytes sent during
transmit. Now the driver computes what appears to be exactly 100%
correct byte counts (not including CRC) when figuring out how many
bytes and frames were sent during the current transmit packet.
Since the driver sets the IP checksum insertion bit (IXSM in Status
field) in transmit context descriptors, it should clear the IP checksum
bits of any garbage so as not to confuse the hardware.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Print RX/TX flow control setting at link up time to display the
actual link FC properties instead of the advertised values.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
This fix attempts to solve a customer (IBM) reported issue with NAPI
enabled e1000 having bad performance when transmitting simultaneously
on four ports. The issue comes down to an interaction between NAPI,
hardware interrupt balancing, and the driver rescheduling poll on
the same processor. Try to fix this by allowing the driver to re-enable
interrupts sooner, instead of polling one more time, when all of the
work was recently completed in cleanup.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Unfortunately the read-free MSI interrupt handler needs to flush writes to
the ICR register, and thus we can't be read-free. Our MSI irq routine
thus becomes a lot simpler since we don't need to track link state
anymore.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
This reverts commit 72f3ab7462, which was
superseded by commit 683a2aa339
("e1000: Do not truncate TSO TCP header with 82544 workaround"), which
fixed the real problem.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The e1000 driver has a workaround for 82544 on PCI-X where if the
terminating byte of a buffer is at addresses 0-3 mod 8, then 4 bytes
are shaved off it and deferred to a new segment. This is due to an
erratum that could otherwise cause TX hangs.
Unfortunately this breaks TSO because it may cause the TCP header to
be split over two segments which itself causes TX hangs. The solution
is to pull 4 bytes of data up from the next segment rather than pushing
4 bytes off. This ensures the TCP header remains in one piece and
works around the PCI-X hang.
This patch is based on one from Jesse Brandeburg.
This bug has been triggered by both CONFIG_DEBUG_SLAB as well as Xen.
Note that the only reason we don't see this normally is because the
TCP stack starts writing from the end, i.e., it writes the TCP header
first then slaps on the IP header, etc. So the end of the TCP header
(skb->tail - 1 here) is always aligned correctly.
Had we made the start of the IP header (e.g., IPv6) 8-byte aligned
instead, this would happen for normal TCP traffic as well.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Currently after an interface up, the link state is detected 2 seconds later
when the first watchdog timer runs. This patch changes that by triggering
the hardware to generate a link-change interrupt from the up() function
instead. This has the result that the link state gets detected immediately
and without races. This has the potential to speed up booting since a normal
distribution boot process waits for a link before DHCP is attempted.
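A hedged sketch of the trigger from up(), using the legacy register access
macro (exact spelling varies across driver versions):

	/* request a Link Status Change interrupt so link state is evaluated
	 * right away instead of on the next 2-second watchdog run */
	E1000_WRITE_REG(&adapter->hw, ICS, E1000_ICS_LSC);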
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Add 3 extra packet redirect counters for tracking purposes to make sure
we can test that all packets arrive properly.
Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>,
rewritten to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Allow the user to vary the size at which copybreak kicks in. Currently
copybreak is enabled for packets < 256 bytes, but various tests indicate
that this should be
configurable for specific use cases. In addition, this parameter allows us
to force never/always during testing to get full and predictable coverage of
both code paths.
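A sketch of the receive-path check the parameter controls (names
illustrative):

	if (length < copybreak) {
		struct sk_buff *new_skb =
			netdev_alloc_skb(netdev, length + NET_IP_ALIGN);

		if (new_skb) {
			skb_reserve(new_skb, NET_IP_ALIGN);
			skb_copy_to_linear_data(new_skb, skb->data, length);
			/* keep the original buffer for reuse; hand the copy up */
			skb = new_skb;
		}
	}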
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Assign the PBA to be large enough to contain at least 2 jumbo frames on
all adapters. This dramatically increases performance on several adapters
and fixes TX performance degradation issues where the PBA was misallocated
in the old algorithm.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The driver has (ancient) code for messing with TIPG from the 82542 days.
Unfortunately this code was running on our current adapters and setting
TIPG for fiber to be +1 over the copper value. This caused 1.45Mpps
to be sent instead of 1.487Mpps.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
For older adapters we know that they are of the PCI bus type, so we can
just set this.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
This bugfix makes sure that the driver data reflects the full new situation
before the adapter is reinitialized.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
On rare occasions, ESB2 systems would end up started without the RX
unit being turned on. Add a check that runs post-init to work around
this issue.
Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>,
rewritten to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
CONFIG_DEBUG_SLAB changes alignments of the data structures the slab
allocators return. These break certain workarounds for TSO on the 82544.
Since DEBUG_SLAB is relatively rare and not used for performance sensitive
cases, the simplest fix is to disable TSO in this special situation.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
If the user has forced gigabit speed, PHY power management must be disabled;
otherwise the NIC would try to negotiate a link speed of 10/100 Mbit on
shutdown, which would lead to a total loss of link. This loss of link breaks
Wake-on-LAN and IPMI.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Several bugs existed in how we handle manageability issues all
over the driver. This patch consolidates all the manageability
release and init code into two functions and calls them from
appropriate locations. This fixes several BMC packet redirect issues
and powerup/down hiccups.
Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>, rewritten
to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The 82543 chip does not count tx_carrier_errors properly in FD mode;
report zeros instead of garbage.
Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>, rewritten
to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The dynamic interrupt rate control patches omitted proper counting
for jumbos and TSO, resulting in suboptimal interrupt mitigation strategies.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The lower 2 bits of a user-supplied itr setting (via ethtool) need to be
masked off; these lower two bits are used as control bits.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Conflicts:
drivers/infiniband/core/iwcm.c
drivers/net/chelsio/cxgb2.c
drivers/net/wireless/bcm43xx/bcm43xx_main.c
drivers/net/wireless/prism54/islpci_eth.c
drivers/usb/core/hub.h
drivers/usb/input/hid-core.c
net/core/netpoll.c
Fix up merge failures with Linus's head and fix new compilation failures.
Signed-Off-By: David Howells <dhowells@redhat.com>
... into anonymous union of __wsum and __u32 (csum and csum_offset resp.)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
ia64:
drivers/built-in.o(.text+0xd9a72): In function `e1000_xmit_frame':
: undefined reference to `csum_ipv6_magic'
Cc: Auke Kok <auke-jan.h.kok@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Add a new dynamic itr algorithm, with 2 modes, and make it the default
operation mode. This greatly reduces latency and increases small packet
performance, at the "cost" of some CPU utilization. Bulk traffic
throughput is unaffected.
The driver can limit the amount of interrupts per second that the
adapter will generate for incoming packets. It does this by writing a
value to the adapter that is based on the maximum amount of interrupts
that the adapter will generate per second.
Setting InterruptThrottleRate to a value greater than or equal to 100 will
program the adapter to send out a maximum of that many interrupts per
second, even if more packets have come in. This reduces interrupt
load on the system and can lower CPU utilization under heavy load,
but will increase latency as packets are not processed as quickly.
The default behaviour of the driver previously assumed a static
InterruptThrottleRate value of 8000, providing a good fallback value
for all traffic types, but lacking in small packet performance and
latency. The hardware can handle many more small packets per second
however, and for this reason an adaptive interrupt moderation algorithm
was implemented.
Since 7.3.x, the driver has two adaptive modes (setting 1 or 3) in
which it dynamically adjusts the InterruptThrottleRate value based on
the traffic that it receives. After determining the type of incoming
traffic in the last timeframe, it will adjust the InterruptThrottleRate
to an appropriate value for that traffic.
The algorithm classifies the incoming traffic every interval into
classes. Once the class is determined, the InterruptThrottleRate
value is adjusted to suit that traffic type the best. There are
three classes defined: "Bulk traffic", for large amounts of packets
of normal size; "Low latency", for small amounts of traffic and/or
a significant percentage of small packets; and "Lowest latency",
for almost completely small packets or minimal traffic.
In dynamic conservative mode, the InterruptThrottleRate value is
set to 4000 for traffic that falls in class "Bulk traffic". If
traffic falls in the "Low latency" or "Lowest latency" class, the
InterruptThrottleRate is increased stepwise to 20000. This default
mode is suitable for most applications.
For situations where low latency is vital such as cluster or
grid computing, the algorithm can reduce latency even more when
InterruptThrottleRate is set to mode 1. In this mode, which operates
the same as mode 3, the InterruptThrottleRate will be increased
stepwise to 70000 for traffic in class "Lowest latency".
Setting InterruptThrottleRate to 0 turns off any interrupt moderation
and may improve small packet latency, but is generally not suitable
for bulk throughput traffic.
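Put as a rough sketch, this is the shape of the logic described above
(names are illustrative, and the real driver steps toward the target rate
rather than jumping to it):

	enum { bulk_latency, low_latency, lowest_latency } traffic;
	u32 new_itr;

	switch (itr_setting) {
	case 0:				/* interrupt moderation off */
		new_itr = 0;
		break;
	case 1:				/* dynamic, latency-biased */
	case 3:				/* dynamic conservative (the default) */
		traffic = classify_last_interval(bytes, packets);
		if (traffic == bulk_latency)
			new_itr = 4000;
		else if (traffic == low_latency)
			new_itr = 20000;
		else			/* lowest_latency */
			new_itr = (itr_setting == 1) ? 70000 : 20000;
		break;
	default:			/* fixed rate, >= 100 ints/s */
		new_itr = itr_setting;
		break;
	}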
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Rick Jones <rick.jones2@hp.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Add a generic MSI interrupt routine that is IO read-free, speeding up
MSI interrupt handling.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
This file needs some cleanups and reordering - logically order it
so that relevant defines and code are together with properly quoted
defaults.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Spec fix: don't set IDE unless we are actually setting the tx
int delay time.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
ICH8 will soon be followed by newer chipsets bearing the same acronym,
thus we remove the '8' and make it independent of the version number in
the platform name.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Add a netif_wake/start_queue counter to the ethtool statistics to indicate
to the user that their transmit ring could be too small for their workload.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Jamal Hadi <hadi@cyberus.ca>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Add support for a Low Profile quad-port PCI-E adapter and 2 variants
of the ICH8 systems' onboard NICs.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
This memsetting was added in a paranoid rage while debugging TX hangs, but
it is no longer of importance. We can beef up the performance quite a
bit by removing it. Make sure to fill in next_to_watch to allow this.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Order the PCI-E capability struct according to bus width ordering,
preserving the hard PCI spec numbers.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
On ICH systems, the voltage regulators were left on during PHY power
down to D3.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
IA64 SMP systems were seeing TX issues with multiple CPUs attempting
to write tail registers out of order. This mmiowb() fixes the issue.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Enable early receives on 82573 for jumbo frame performance. Jumbos
are only supported on 82573L with ASPM disabled.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Enable TSO for IPv6. All e1000 hardware supports it. This reduces CPU
utilization by 50% when transmitting IPv6 frames.
Fix symbol naming when enabling IPv6 TSO. Turn off TSO6 for 10/100.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Remove debugging code disabling MULR (multiple reads). It's not usable
for a wide audience and there are no known problems with MULR right
now.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Fix various .c/.h typos in comments (no code changes).
Signed-off-by: Matt LaPlante <kernel1@cyberdogtech.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Pass the work_struct pointer to the work function rather than context data.
The work function can use container_of() to work out the data.
For the cases where the container of the work_struct may go away the moment the
pending bit is cleared, it is made possible to defer the release of the
structure by deferring the clearing of the pending bit.
To make this work, an extra flag is introduced into the management side of the
work_struct. This governs auto-release of the structure upon execution.
Ordinarily, the work queue executor would release the work_struct for further
scheduling or deallocation by clearing the pending bit prior to jumping to the
work function. This means that, unless the driver makes some guarantee itself
that the work_struct won't go away, the work function may not access anything
else in the work_struct or its container lest they be deallocated. This is a
problem if the auxiliary data is taken away (as done by the last patch).
However, if the pending bit is *not* cleared before jumping to the work
function, then the work function *may* access the work_struct and its container
with no problems. But then the work function must itself release the
work_struct by calling work_release().
In most cases, automatic release is fine, so this is the default. Special
initiators exist for the non-auto-release case (ending in _NAR).
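For the common auto-release case, the new pattern for a hypothetical driver
looks like this (names illustrative):

	struct foo_adapter {
		struct work_struct reset_task;
		/* ... */
	};

	static void foo_reset_task(struct work_struct *work)
	{
		struct foo_adapter *adapter =
			container_of(work, struct foo_adapter, reset_task);

		/* use 'adapter' as before; no context pointer is passed in */
	}

	/* setup, now without a separate data argument: */
	INIT_WORK(&adapter->reset_task, foo_reset_task);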
Signed-Off-By: David Howells <dhowells@redhat.com>
e1000: Fix suspend/resume powerup and irq allocation
From: Auke Kok <auke-jan.h.kok@intel.com>
After 7.0.33/2.6.16, e1000 suspend/resume left the user with an enabled
device showing garbled statistics and undetermined irq allocation state,
where `ifconfig eth0 down` would display `trying to free already freed irq`.
Explicitly freeing and allocating the irq, as well as powering up the PHY
during resume when needed, fixes this.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Move the length (rx_bytes counter) adjustment of 4 bytes down to after the
TBI_ACCEPT workaround.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
82571 and newer chipsets don't need to limit descriptor length to 4kB and
can handle 8kB sizes.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Allocations using alloc_page are taking too long for normal MTU, so
use LPE only for jumbo frames.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Threshold bitmasks for prefetch, host and writeback were clearing
bits that they were not supposed to. The leftmost 2 bits in the byte
for each threshold are reserved.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
The MANC register should not be read at all for PCI-E adapters, nor for
82543 and older, where the 82543 would master abort when this register was
accessed.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
During the handling of the PCI error recovery sequence, the current e1000
driver erroneously blocks a device reset for any but the first PCI
function. It shouldn't -- this is a cut-n-paste error from a different
driver (which tolerated only one hardware reset per hardware card).
Signed-off-by: Linas Vepstas <linas@austin.ibm.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Maintain a per-CPU global "struct pt_regs *" variable which can be used instead
of passing regs around manually through all ~1800 interrupt handlers in the
Linux kernel.
The regs pointer is used in a few places, but it potentially costs both stack
space and code to pass it around. On the FRV arch, removing the regs parameter
from all the genirq functions results in a 20% speed up of the IRQ exit path
(i.e. from leaving timer_interrupt() to leaving do_IRQ()).
Where appropriate, an arch may override the generic storage facility and do
something different with the variable. On FRV, for instance, the address is
maintained in GR28 at all times inside the kernel as part of general exception
handling.
Having looked over the code, it appears that the parameter may be handed down
through up to twenty or so layers of functions. Consider a USB character
device attached to a USB hub, attached to a USB controller that posts its
interrupts through a cascaded auxiliary interrupt controller. A character
device driver may want to pass regs to the sysrq handler through the input
layer which adds another few layers of parameter passing.
I've built this code with allyesconfig for x86_64 and i386. I've run-tested the
main part of the code on FRV and i386, though I can't test most of the drivers.
I've also done partial conversion for powerpc and MIPS - these at least compile
with minimal configurations.
This will affect all archs. Mostly the changes should be relatively easy.
Take do_IRQ(), store the regs pointer at the beginning, saving the old one:
struct pt_regs *old_regs = set_irq_regs(regs);
And put the old one back at the end:
set_irq_regs(old_regs);
Don't pass regs through to generic_handle_irq() or __do_IRQ().
In timer_interrupt(), this sort of change will be necessary:
- update_process_times(user_mode(regs));
- profile_tick(CPU_PROFILING, regs);
+ update_process_times(user_mode(get_irq_regs()));
+ profile_tick(CPU_PROFILING);
I'd like to move update_process_times()'s use of get_irq_regs() into itself,
except that i386, alone of the archs, uses something other than user_mode().
Some notes on the interrupt handling in the drivers:
(*) input_dev() is now gone entirely. The regs pointer is no longer stored in
the input_dev struct.
(*) finish_unlinks() in drivers/usb/host/ohci-q.c needs checking. It does
something different depending on whether it's been supplied with a regs
pointer or not.
(*) Various IRQ handler function pointers have been moved to type
irq_handler_t.
Signed-Off-By: David Howells <dhowells@redhat.com>
(cherry picked from 1b16e7ac850969f38b375e511e3fa2f474a33867 commit)
Memory allocated for new tx_ring and rx_ring leaks if
e1000_setup_XX_resources() fails. This patch also reduces stack usage
(removing tx_new and rx_new) and uses kzalloc instead of kmalloc+memset(0).
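For example (a sketch of the allocation change):

	/* before */
	txdr = kmalloc(size, GFP_KERNEL);
	if (!txdr)
		return -ENOMEM;
	memset(txdr, 0, size);

	/* after */
	txdr = kzalloc(size, GFP_KERNEL);
	if (!txdr)
		return -ENOMEM;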
Signed-off-by: Vasily Averin <vvs@sw.ru>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>