Both the old e1000 driver and the new e1000e driver can drive some
PCI-Express e1000 cards, and we should avoid ambiguity about which
driver will pick up the support for those cards when both drivers are
enabled.
This solves the problem by having the old driver support those cards if
the new driver isn't configured, but otherwise ceding support for
PCI-Express versions of the e1000 chipset to the newer driver. This
allows both the legacy configuration, where only the old driver is
active and handles all the chips it knows about, and the new
configuration, where the new driver handles the more modern PCI-Express
variants.
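A minimal sketch of how the handoff could look in e1000's PCI ID table;
the guard symbol and device ID used here are illustrative, not
necessarily the exact names used in the tree:

static struct pci_device_id e1000_pci_tbl[] = {
        INTEL_E1000_ETHERNET_DEVICE(0x1000),    /* PCI / PCI-X parts stay here */
        /* ... */
#ifndef CONFIG_E1000E                           /* illustrative guard symbol */
        INTEL_E1000_ETHERNET_DEVICE(0x105E),    /* PCI-Express parts claimed only
                                                 * when the new driver isn't built */
        /* ... */
#endif
        {0,}
};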
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The new e1000e driver is apparently not yet suitable for general use, so
mark it experimental, and re-instate all the PCI-Express device IDs in
the old and stable e1000 driver so that people (namely me) can continue
to use a driver that actually works.
Auke & co have been apprised of the situation.
Cc: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
To help support users with a bad EEPROM checksum, dump the EEPROM
contents when such a situation is encountered by a user.
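For illustration, the dump could look roughly like this (the helper and
register names follow the driver's usual conventions but are only an
approximation of the actual patch):

static void e1000_dump_eeprom(struct e1000_adapter *adapter)
{
        u16 word;
        int i;

        printk(KERN_ERR "e1000: EEPROM dump for bug reporting:\n");
        for (i = 0; i <= EEPROM_CHECKSUM_REG; i++) {
                e1000_read_eeprom(&adapter->hw, i, 1, &word);
                printk(KERN_CONT "%04x ", word);
        }
        printk(KERN_CONT "\n");
}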
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Add support for configuring secondary unicast addresses. Unicast
addresses take precedence over multicast addresses when filling the
exact address filters, to avoid going to promiscuous mode. When more
unicast addresses are present than there are filter slots, unicast
filtering is disabled and all slots can be used for multicast
addresses.
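Roughly, the filter programming order becomes the following (a sketch
only; the counters, arrays and helpers are illustrative names):

rar_used = 1;                           /* slot 0 holds the primary MAC */

/* exact-match filter slots go to unicast addresses first */
for (i = 0; i < uc_count && rar_used < rar_entries; i++)
        e1000_rar_set(hw, uc_addr[i], rar_used++);

if (i < uc_count) {
        /* not enough slots: hand them all to multicast instead and
         * fall back to promiscuous-style unicast filtering */
        rar_used = 1;
        uc_promisc = true;
}

for (i = 0; i < mc_count && rar_used < rar_entries; i++)
        e1000_rar_set(hw, mc_addr[i], rar_used++);

/* remaining multicast addresses land in the hash table */
for (; i < mc_count; i++)
        e1000_hash_mc_addr(hw, mc_addr[i]);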
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Formerly, e1000/e1000e only updated the traffic counters once every
two seconds, from the register values for bytes/packets. With the newer
code, the interrupt and polling paths can fill in these values in the
netstats struct in real time for users to see.
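In practice that means the clean routines can do something along these
lines (the field names are illustrative):

/* in the RX clean path, per completed descriptor */
adapter->net_stats.rx_packets++;
adapter->net_stats.rx_bytes += length;

/* in the TX clean path, per reclaimed skb */
adapter->net_stats.tx_packets++;
adapter->net_stats.tx_bytes += skb->len;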
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e1000e will from now on support the PCI-Express adapters that
previously were supported by e1000. This means better performance
and easier debugging for both the old PCI-X/PCI hardware and the
PCI-Express adapters.
This patch also moves over to e1000e three recently merged device IDs
that are quad-port versions of already existing dual-port devices. With
this last bit, every former e1000 PCI-Express device should now work
with e1000e.
Here is a brief list of which gigabit driver to use with which
adapter:
e1000:
82540 -> 82547
e1000e:
82571 -> 82573
ich8, ich9 (82562 or 82566)
es2lan (80003eslan)
igb: (not yet merged, only available from e1000.sf.net)
82575
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Several of the Intel ethernet drivers keep an atomic counter used to
manage when to actually hit the hardware with a disable or an enable.
The way the net_rx_work() breakout logic works during a pending
napi_disable() is that it simply unschedules the poll even if it
still has work.
This can potentially leave interrupts disabled, but that is OK
because all of the drivers are about to disable interrupts anyway
in all such code paths that do a napi_disable().
Unfortunately, this trips up the semaphore used here in the Intel
drivers. If you hit this case, when you try to bring the interface
back up it won't enable interrupts. A reload of the driver module
fixes it of course.
So what we do is make sure all the sequences now go:
napi_disable();
atomic_set(&adapter->irq_sem, 0);
*_irq_disable();
which makes sure the counter is always in the correct state.
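For reference, the counter works roughly like this (register and field
names follow the driver's style and are shown only for illustration),
which is why a stale nonzero count leaves interrupts off for good:

static void e1000_irq_disable(struct e1000_adapter *adapter)
{
        atomic_inc(&adapter->irq_sem);
        E1000_WRITE_REG(&adapter->hw, IMC, ~0);
        E1000_WRITE_FLUSH(&adapter->hw);
        synchronize_irq(adapter->pdev->irq);
}

static void e1000_irq_enable(struct e1000_adapter *adapter)
{
        /* only touch the hardware once the nesting count reaches zero */
        if (likely(atomic_dec_and_test(&adapter->irq_sem))) {
                E1000_WRITE_REG(&adapter->hw, IMS, IMS_ENABLE_MASK);
                E1000_WRITE_FLUSH(&adapter->hw);
        }
}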
Reported by Robert Olsson.
Signed-off-by: David S. Miller <davem@davemloft.net>
This fixes a regression added by changeset
53e52c729c ("[NET]: Make ->poll()
breakout consistent in Intel ethernet drivers.")
As pointed out by Jesse Brandeburg, for three of the drivers edited
above there is breakout logic in the *_clean_tx_irq() code to prevent
running TX reclaim forever. If this occurs, we have to elide NAPI
poll completion or else those TX events will never be serviced.
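Concretely, the ->poll() routines now do roughly the following (variable
names are illustrative):

tx_clean_complete = e1000_clean_tx_irq(adapter, adapter->tx_ring);
adapter->clean_rx(adapter, adapter->rx_ring, &work_done, budget);

/* if TX reclaim had to break out early, claim the full budget so
 * NAPI keeps polling and the remaining TX work gets serviced */
if (!tx_clean_complete)
        work_done = budget;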
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
This makes the ->poll() routines of the E100, E1000, E1000E, IXGB, and
IXGBE drivers complete ->poll() consistently.
Now they will all break out when the amount of RX work done is less
than 'budget'.
At a later time, we may want to put back code to include the TX work
as well (as at least one other NAPI driver does, but by and large NAPI
drivers do not do this). But if so, it should be done consistently
across the board in all of these drivers.
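The common shape of the completion logic is now (a sketch, using the
helper names of this era; the clean_rx hook and ring arguments are
approximate):

static int e1000_clean(struct napi_struct *napi, int budget)
{
        struct e1000_adapter *adapter =
                container_of(napi, struct e1000_adapter, napi);
        int work_done = 0;

        adapter->clean_rx(adapter, adapter->rx_ring, &work_done, budget);

        /* break out only when less than the full budget was used */
        if (work_done < budget) {
                netif_rx_complete(adapter->netdev, napi);
                e1000_irq_enable(adapter);
        }
        return work_done;
}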
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Drivers do this to try to break out of the ->poll()'ing loop
when the device is being brought administratively down.
Now that we have a napi_disable() "pending" state we are going
to solve that problem generically.
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't exit polling when we have not yet used our budget; doing so
causes the NAPI system to end up with a messed-up poll list.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
mii-tool can cause the driver to call msleep during an nway reset
while a spinlock is held (bugzilla.kernel.org bug 8430). Fix by simply
calling reinit_locked outside of the spinlock, which is safe from
ethtool, so it should be safe from here as well.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix sparse warnings and problems in the e1000 driver.
Added a sparse fix for the module parameter array index.
-- Auke
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
These driver changes incorporate the proposed PCI-X / PCI-Express read
byte count interface. Reading and setting those values no longer takes
place "manually"; instead, wrapper functions are called that allow
quirks to be applied for some PCI bridges.
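For example, instead of open-coded config space accesses, a driver can
now do something like this (the values shown are illustrative):

/* PCI-X: set the maximum memory read byte count through the wrapper,
 * which can apply bridge-specific quirks behind the scenes */
pcix_set_mmrbc(pdev, 2048);

/* PCI Express: set the maximum read request size the same way */
pcie_set_readrq(pdev, 4096);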
Signed-off-by: Peter Oruba <peter.oruba@amd.com>
Based on work by Stephen Hemminger <shemminger@linux-foundation.org>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
It's been a useless no-op for long enough in 2.6, so I figured it's
time to remove it. The number of people who could object because
they're maintaining unified 2.4 and 2.6 drivers is probably rather
small.
[ Handled drivers added by netdev tree and some missed IRDA cases... -DaveM ]
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Several devices have multiple independent RX queues per net device,
and some have a single interrupt doorbell for several queues.
In either case, it's easier to support layouts like that if the
structure representing the poll is independent of the net device
itself.
The signature of the ->poll() call back goes from:
int foo_poll(struct net_device *dev, int *budget)
to
int foo_poll(struct napi_struct *napi, int budget)
The caller is returned the number of RX packets processed (or
the number of "NAPI credits" consumed if you want to get
abstract). The callee no longer messes around bumping
dev->quota, *budget, etc. because that is all handled in the
caller upon return.
The napi_struct is to be embedded in the device driver private data
structures.
Furthermore, it is the driver's responsibility to disable all NAPI
instances in its ->stop() device close handler. Since the
napi_struct is privatized into the driver's private data structures,
only the driver knows how to get at all of the napi_struct instances
it may have per-device.
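In driver terms the conversion looks roughly like this (a sketch using
e1000-style names for illustration):

struct e1000_adapter {
        /* ... */
        struct napi_struct napi;        /* embedded, no longer in net_device */
        /* ... */
};

static int e1000_close(struct net_device *netdev)
{
        struct e1000_adapter *adapter = netdev_priv(netdev);

        /* the driver, not the stack, must quiesce its NAPI instances */
        napi_disable(&adapter->napi);
        /* ... */
        return 0;
}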
With lots of help and suggestions from Rusty Russell, Roland Dreier,
Michael Chan, Jeff Garzik, and Jamal Hadi Salim.
Bug fixes from Thomas Graf, Roland Dreier, Peter Zijlstra,
Joseph Fannin, Scott Wood, Hans J. Koch, and Michael Chan.
[ Ported to current tree and all drivers converted. Integrated
Stephen's follow-on kerneldoc additions, and restored poll_list
handling to the old style to fix mutual exclusion issues. -DaveM ]
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This blade-specific board form factor is identical to the 82571EB
board.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
This patch adds support for 2 new board variants:
- A Quad port fiber 82571 board
- A blade version of the 82571 quad copper board
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Instead of all drivers reading PCI config space to get the revision
ID, they can now use the revision member of struct pci_dev.
This exposes some issues where drivers were reading a word or a dword
for the revision number, and adding useless error handling around the
read. Some drivers even just read it for no purpose at all.
In devices where the revision ID is being copied over and used in what
appears to be the equivalent of a hotpath, I have left the copy code
and the cached copy in place so as not to influence the driver's
performance.
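The conversion is typically a one-liner, roughly (pdev/rev names are
illustrative):

/* before: a config space read, often with pointless error handling */
pci_read_config_byte(pdev, PCI_REVISION_ID, &rev);

/* after: the PCI core has already cached the value */
rev = pdev->revision;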
Compile tested with make all{yes,mod}config on x86_64 and i386.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Acked-by: Dave Jones <davej@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
To ensure the symmetry of poll enable/disable in up/down, we should
initialize the netdevice to be poll-disabled at load time. Doing
this after register_netdevice leaves us open to another race, so
let's move all the netif_* calls above register_netdevice so the
stack starts out how we expect it to be.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Doug Chapman <doug.chapman@hp.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
This restores the previously removed netif_poll_enable call in e1000_open.
It's needed on all but the first call to e1000_open for a NIC as
e1000_close always calls netif_poll_disable.
netif_poll_enable can only be called safely if no polls have been
scheduled. This should be the case as long as we don't enter our IRQ
handler.
In order to guarantee this we explicitly disable IRQs as early as possible
when we're probing the NIC.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Kok, Auke" <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Herbert Xu wrote:
"netif_poll_enable can only be called if you've previously called
netif_poll_disable. Otherwise a poll might already be in action
and you may get a crash like this."
Removing the call to netif_poll_enable in e1000_open should fix this
issue; the only other call to netif_poll_enable is in e1000_up(), which
is only reached after a device reset or resume.
Bugzilla: http://bugzilla.kernel.org/show_bug.cgi?id=8455
Bugzilla: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=240339
Tested by Doug Chapman <doug.chapman@hp.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
pci_enable_msi failure is a normal event, so we should not print any
error. Going over the code I also spotted a leak: pci_disable_msi() was
missing when irq allocation fails. The whole code also needed a
cleanup, so I combined the two different calls to pci_request_irq into
a single call, making this look a lot better. All #ifdef
CONFIG_PCI_MSI's have been removed.
Compile tested with both CONFIG_PCI_MSI enabled and disabled.
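The resulting request path looks roughly like this (a sketch; the flag
and field names approximate the driver of that time):

static int e1000_request_irq(struct e1000_adapter *adapter)
{
        struct net_device *netdev = adapter->netdev;
        int irq_flags = IRQF_SHARED;
        int err;

        if (adapter->hw.mac_type >= e1000_82571) {
                /* failure is fine, we silently fall back to legacy irqs */
                adapter->have_msi = !pci_enable_msi(adapter->pdev);
                if (adapter->have_msi)
                        irq_flags = 0;
        }

        err = request_irq(adapter->pdev->irq, e1000_intr, irq_flags,
                          netdev->name, netdev);
        if (err && adapter->have_msi)
                pci_disable_msi(adapter->pdev);         /* plug the leak */

        return err;
}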
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
flush_work(wq, work) doesn't need the first parameter; we can use
cwq->wq (this was possible from the very beginning, I missed this). So
we can unify flush_work_keventd and flush_work.
Also, rename flush_work() to cancel_work_sync() and fix all callers.
Perhaps this is not the best name, but "flush_work" is really bad.
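Callers change along these lines (the work item and workqueue names are
illustrative):

/* before: redundant workqueue argument, two spellings in use */
flush_work(wq, &adapter->reset_task);
flush_work_keventd(&adapter->reset_task);

/* after: one unified helper, no workqueue argument needed */
cancel_work_sync(&adapter->reset_task);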
(akpm: this is why the earlier patches bypassed maintainers)
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Auke Kok <auke-jan.h.kok@intel.com>,
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Switch e1000 over to flush_work_keventd(). This probably fixes a netdev-close
versus linkwatch rtnl_lock() deadlock which nobody knew about.
(akpm: bypassed maintainers, sorry. There are other patches which depend on
this)
Cc: "Maciej W. Rozycki" <macro@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Garzik <jeff@garzik.org>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use the round_jiffies() function in e1000.
These timers were all of the "about once a second" or "about once
every X seconds" variety, and several showed up in the "what wakes the
cpu up" profiles that the tickless patches provide. Some timers are
highly dynamic based on network load, but even on low-activity systems
they still show up, so the rounding is done only in cases of low
activity, allowing higher-frequency timers in the high-activity case.
The various hardware watchdogs are an obvious case; they run every 2
seconds but aren't otherwise particular about exactly when they need
to run.
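For the watchdog that means, roughly:

/* before: wakes up at an arbitrary point within the second */
mod_timer(&adapter->watchdog_timer, jiffies + 2 * HZ);

/* after: batch the wakeup onto a whole second with other timers */
mod_timer(&adapter->watchdog_timer, round_jiffies(jiffies + 2 * HZ));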
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
* 'e1000-fixes' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6:
e1000: FIX: Stop raw interrupts disabled nag from RT
e1000: FIX: firmware handover bits
e1000: FIX: be ready for incoming irq at pci_request_irq
The current e1000_xmit_frame spews raw-interrupts-disabled nag
messages when used with the RT kernel patches. This patch uses
spin_trylock_irqsave, which allows the RT patches to properly manage
the irq semantics.
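The relevant change is essentially (the lock name is illustrative):

unsigned long flags;

if (!spin_trylock_irqsave(&adapter->tx_lock, flags))
        /* can't take the lock without spinning with irqs off;
         * ask the stack to requeue the frame instead */
        return NETDEV_TX_LOCKED;

/* ... queue the frame ... */

spin_unlock_irqrestore(&adapter->tx_lock, flags);
return NETDEV_TX_OK;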
Signed-off-by: Mark Huth <mhuth@mvista.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Upon code inspection it was spotted that the firmware handover bit
get/set was mismatched, which may have resulted in management issues
on PCI-Express adapters. Setting the bits correctly may fix some
management issues such as ARP routing etc.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
DEBUG_SHIRQ code exposed that e1000 was not ready for incoming
interrupts after having called pci_request_irq. This obviously
requires us to finish the software setup that assigns the irq handler
before we request the irq.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
When csum_offset was introduced we did a conversion from csum to
csum_offset where applicable. A couple of drivers were missed in
this process.
It was harmless to begin with since the two fields coincided. Now
that we've made them different with the addition of csum_start, the
missed drivers must be converted or they won't be able to send out
any packets that require checksum offload.
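In driver terms the missed conversion is a one-liner of this shape (the
variable names are illustrative):

/* before: worked only while csum and csum_offset coincided */
tucso = css + skb->csum;

/* after: csum_offset is relative to csum_start */
tucso = css + skb->csum_offset;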
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
To clearly state the intent of copying to linear sk_buffs, _offset
being an overly long variant but interesting for the sake of saving
some bytes.
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
So that it is also an offset from skb->head; this reduces its size
from 8 to 4 bytes on 64-bit architectures, allowing us to combine it
with the 4-byte hole left by the layer headers conversion, reducing
struct sk_buff size to 256 bytes, i.e. 4 64-byte cachelines, and since
the sk_buff slab cache is SLAB_HWCACHE_ALIGN...
:-)
Many calculations that previously required that skb->{transport,network,
mac}_header first be converted to a pointer can now be done directly,
being meaningful as offsets or pointers.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ip_hdrlen() buddy, created to reduce the number of skb->h.th-> uses and to
avoid the longer, open coded equivalent.
Ditched a no-op in bnx2 in the process.
I wonder if we should have a BUG_ON(skb->h.th->doff < 5) in tcp_optlen()...
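Presumably along the lines of the following (a sketch based on the
description above; skb->h.th was still the transport header union
member at this point):

static inline unsigned int tcp_optlen(const struct sk_buff *skb)
{
        /* TCP options are whatever exceeds the 5-word minimum header */
        return (skb->h.th->doff - 5) * 4;
}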
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For the quite common 'skb->h.raw - skb->data' sequence.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now the skb->nh union has just one member, .raw, i.e. it is just like the
skb->mac union, strange, no? I'm just leaving it like that till the transport
layer is done with, when we'll rename skb->mac.raw to skb->mac_header (or
->mac_header_offset?), ditto for ->{h,nh}.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For the quite common 'skb->nh.raw - skb->data' sequence.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit 60cba200f1. It's been
linked to lockups of the e1000 hardware, see for example
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=229603
but it's likely that the commit itself is not really introducing the
bug, but just allowing an unrelated problem to rear its ugly head (i.e.
one current working theory is that the code exposes us to a hardware
race condition by decreasing the amount of time we spend in each NAPI
poll cycle).
We'll revert it until the root cause is known. Intel has a repeatable
reproduction on two different machines and bus traces of the hardware
doing something bad.
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg KH <gregkh@suse.de>
Cc: Dave Jones <davej@redhat.com>
Cc: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch splits the vlan_group struct into a multi-allocated struct.
On x86_64, the size of the original struct is a little more than 32KB,
causing a 4-order allocation, which is prone to problems caused by
buddy-system external fragmentation conditions.
I couldn't just use vmalloc() because vfree() cannot be called in the
softirq context of the RCU callback.
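The split layout looks roughly like this (constants and field names are
approximate): the flat pointer array becomes several independently
allocated chunks, so with these example numbers each chunk fits in a
single page.

#define VLAN_GROUP_ARRAY_LEN            4096
#define VLAN_GROUP_ARRAY_SPLIT_PARTS    8
#define VLAN_GROUP_ARRAY_PART_LEN \
        (VLAN_GROUP_ARRAY_LEN / VLAN_GROUP_ARRAY_SPLIT_PARTS)

struct vlan_group {
        struct net_device *real_dev;
        struct net_device **vlan_devices_arrays[VLAN_GROUP_ARRAY_SPLIT_PARTS];
        struct rcu_head rcu;
};

static inline struct net_device *vlan_group_get_device(struct vlan_group *vg,
                                                       unsigned int vlan_id)
{
        struct net_device **array;

        array = vg->vlan_devices_arrays[vlan_id / VLAN_GROUP_ARRAY_PART_LEN];
        return array ? array[vlan_id % VLAN_GROUP_ARRAY_PART_LEN] : NULL;
}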
Signed-off-by: Dan Aloni <da-x@monatomic.org>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit d2ed16356f.
As Thomas Gleixner reports:
"e1000 is not working anymore. ifup fails permanentely.
ADDRCONF(NETDEV_UP): eth0: link is not ready
nothing else"
The broken commit was identified with "git bisect".
Auke Kok says:
"I think we need to drop this now. The report that says that this
*fixes* something might have been on regular interrupts only. I
currently suspect that it breaks all MSI interrupts, which would make
sense if I look at the code. Very bad indeed."
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Now that 2.6.19 provides a proper implementation that saves the MSI
and PCI-Express config space, we can have the e1000 driver use that
instead of its custom implementation.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>