L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/vortex.txt
-F: drivers/net/3c59x.c
+F: drivers/net/ethernet/3com/3c59x.c
3CR990 NETWORK DRIVER
M: David Dillow <dave@thedillows.org>
L: netdev@vger.kernel.org
S: Maintained
-F: drivers/net/typhoon*
+F: drivers/net/ethernet/3com/typhoon*
3WARE SAS/SATA-RAID SCSI DRIVERS (3W-XXXX, 3W-9XXX, 3W-SAS)
M: Adam Radford <linuxraid@lsi.com>
ACENIC DRIVER
M: Jes Sorensen <jes@trained-monkey.org>
L: linux-acenic@sunsite.dk
S: Maintained
-F: drivers/net/acenic*
+F: drivers/net/ethernet/3com/acenic*
ACER ASPIRE ONE TEMPERATURE AND FAN DRIVER
M: Peter Feuerer <peter@piie.net>
+++ /dev/null
-/* 3c501.c: A 3Com 3c501 Ethernet driver for Linux. */
-/*
- Written 1992,1993,1994 Donald Becker
-
- Copyright 1993 United States Government as represented by the
- Director, National Security Agency. This software may be used and
- distributed according to the terms of the GNU General Public License,
- incorporated herein by reference.
-
- This is a device driver for the 3Com Etherlink 3c501.
- Do not purchase this card, even as a joke. Its performance is horrible,
- and it breaks in many ways.
-
- The original author may be reached as becker@scyld.com, or C/O
- Scyld Computing Corporation
- 410 Severn Ave., Suite 210
- Annapolis MD 21403
-
- Fixed (again!) the missing interrupt locking on TX/RX shifting.
- Alan Cox <alan@lxorguk.ukuu.org.uk>
-
- Removed calls to init_etherdev since they are no longer needed, and
- cleaned up modularization just a bit. The driver still allows only
- the default address for cards when loaded as a module, but that's
- really less braindead than anyone using a 3c501 board. :)
- 19950208 (invid@msen.com)
-
- Added traps for interrupts hitting the window as we clear and TX load
- the board. Now getting 150K/second FTP with a 3c501 card. Still playing
- with a TX-TX optimisation to see if we can touch 180-200K/second as seems
- theoretically maximum.
- 19950402 Alan Cox <alan@lxorguk.ukuu.org.uk>
-
- Cleaned up for 2.3.x because we broke SMP now.
- 20000208 Alan Cox <alan@lxorguk.ukuu.org.uk>
-
- Check up pass for 2.5. Nothing significant changed
- 20021009 Alan Cox <alan@lxorguk.ukuu.org.uk>
-
- Fixed zero fill corner case
- 20030104 Alan Cox <alan@lxorguk.ukuu.org.uk>
-
-
- For the avoidance of doubt the "preferred form" of this code is one which
- is in an open non patent encumbered format. Where cryptographic key signing
- forms part of the process of creating an executable the information
- including keys needed to generate an equivalently functional executable
- are deemed to be part of the source code.
-
-*/
-
-
-/**
- * DOC: 3c501 Card Notes
- *
- * Some notes on this thing if you have to hack it. [Alan]
- *
- * Some documentation is available from 3Com. Due to the board's age
- * standard responses when you ask for this will range from 'be serious'
- * to 'give it to a museum'. The documentation is incomplete and mostly
- * of historical interest anyway.
- *
- * The basic system is a single buffer which can be used to receive or
- * transmit a packet. A third command mode exists when you are setting
- * things up.
- *
- * If it's transmitting it's not receiving and vice versa. In fact the
- * time to get the board back into useful state after an operation is
- * quite large.
- *
- * The driver works by keeping the board in receive mode waiting for a
- * packet to arrive. When one arrives it is copied out of the buffer
- * and delivered to the kernel. The card is reloaded and off we go.
- *
- * When transmitting lp->txing is set and the card is reset (from
- * receive mode) [possibly losing a packet just received] to command
- * mode. A packet is loaded and transmit mode triggered. The interrupt
- * handler runs different code for transmit interrupts and can handle
- * returning to receive mode or retransmissions (yes you have to help
- * out with those too).
- *
- * DOC: Problems
- *
- * There are a wide variety of undocumented error returns from the card
- * and you basically have to kick the board and pray if they turn up. Most
- * only occur under extreme load or if you do something the board doesn't
- * like (eg touching a register at the wrong time).
- *
- * The driver is less efficient than it could be. It switches through
- * receive mode even if more transmits are queued. If this worries you buy
- * a real Ethernet card.
- *
- * The combination of slow receive restart and no real multicast
- * filter makes the board unusable with a kernel compiled for IP
- * multicasting in a real multicast environment. That's down to the board,
- * but even with no multicast programs running a multicast IP kernel is
- * in group 224.0.0.1 and you will therefore be listening to all multicasts.
- * One nv conference running over that Ethernet and you can give up.
- *
- */
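In brief, the single-buffer operation described above reduces to the sequence below. This is a condensed, illustrative sketch only, not part of the driver: it uses the register macros from 3c501.h further down and omits the locking, short-frame padding and load-collision recovery that the real el_start_xmit() and el_interrupt() have to handle.

static void el1_tx_sequence_sketch(int ioaddr, const unsigned char *buf, int len)
{
	int gp_start = 0x800 - len;	/* frame ends at the top of the 2K buffer */

	outb(AX_SYS, AX_CMD);		/* drop out of RX into command mode */
	inb(RX_STATUS);			/* clear any pending status */
	inb(TX_STATUS);
	outw(gp_start, GP_LOW);		/* point the buffer pointer at the frame */
	outsb(DATAPORT, buf, len);	/* PIO the frame in; the pointer auto-increments */
	outw(gp_start, GP_LOW);		/* rewind the pointer the load just advanced */
	outb(AX_XMIT, AX_CMD);		/* trigger transmit; completion arrives as an interrupt */
}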
-
-#define DRV_NAME "3c501"
-#define DRV_VERSION "2002/10/09"
-
-
-static const char version[] =
- DRV_NAME ".c: " DRV_VERSION " Alan Cox (alan@lxorguk.ukuu.org.uk).\n";
-
-/*
- * Braindamage remaining:
- * The 3c501 board.
- */
-
-#include <linux/module.h>
-
-#include <linux/kernel.h>
-#include <linux/fcntl.h>
-#include <linux/ioport.h>
-#include <linux/interrupt.h>
-#include <linux/string.h>
-#include <linux/errno.h>
-#include <linux/spinlock.h>
-#include <linux/ethtool.h>
-#include <linux/delay.h>
-#include <linux/bitops.h>
-
-#include <asm/uaccess.h>
-#include <asm/io.h>
-
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/init.h>
-
-#include "3c501.h"
-
-/*
- * The boilerplate probe code.
- */
-
-static int io = 0x280;
-static int irq = 5;
-static int mem_start;
-
-/**
- * el1_probe: - probe for a 3c501
- * @dev: The device structure passed in to probe.
- *
- * This can be called from two places. The network layer will probe using
- * a device structure passed in with the probe information completed. For a
- * modular driver we use #init_module to fill in our own structure and probe
- * for it.
- *
- * Returns 0 on success. ENXIO if asked not to probe and ENODEV if asked to
- * probe and failing to find anything.
- */
-
-struct net_device * __init el1_probe(int unit)
-{
- struct net_device *dev = alloc_etherdev(sizeof(struct net_local));
- static const unsigned ports[] = { 0x280, 0x300, 0};
- const unsigned *port;
- int err = 0;
-
- if (!dev)
- return ERR_PTR(-ENOMEM);
-
- if (unit >= 0) {
- sprintf(dev->name, "eth%d", unit);
- netdev_boot_setup_check(dev);
- io = dev->base_addr;
- irq = dev->irq;
- mem_start = dev->mem_start & 7;
- }
-
- if (io > 0x1ff) { /* Check a single specified location. */
- err = el1_probe1(dev, io);
- } else if (io != 0) {
- err = -ENXIO; /* Don't probe at all. */
- } else {
- for (port = ports; *port && el1_probe1(dev, *port); port++)
- ;
- if (!*port)
- err = -ENODEV;
- }
- if (err)
- goto out;
- err = register_netdev(dev);
- if (err)
- goto out1;
- return dev;
-out1:
- release_region(dev->base_addr, EL1_IO_EXTENT);
-out:
- free_netdev(dev);
- return ERR_PTR(err);
-}
-
-static const struct net_device_ops el_netdev_ops = {
- .ndo_open = el_open,
- .ndo_stop = el1_close,
- .ndo_start_xmit = el_start_xmit,
- .ndo_tx_timeout = el_timeout,
- .ndo_set_multicast_list = set_multicast_list,
- .ndo_change_mtu = eth_change_mtu,
- .ndo_set_mac_address = eth_mac_addr,
- .ndo_validate_addr = eth_validate_addr,
-};
-
-/**
- * el1_probe1:
- * @dev: The device structure to use
- * @ioaddr: An I/O address to probe at.
- *
- * The actual probe. This is iterated over by #el1_probe in order to
- * check all the applicable device locations.
- *
- * Returns 0 for a success, in which case the device is activated,
- * EAGAIN if the IRQ is in use by another driver, and ENODEV if the
- * board cannot be found.
- */
-
-static int __init el1_probe1(struct net_device *dev, int ioaddr)
-{
- struct net_local *lp;
- const char *mname; /* Vendor name */
- unsigned char station_addr[6];
- int autoirq = 0;
- int i;
-
- /*
- * Reserve I/O resource for exclusive use by this driver
- */
-
- if (!request_region(ioaddr, EL1_IO_EXTENT, DRV_NAME))
- return -ENODEV;
-
- /*
- * Read the station address PROM data from the special port.
- */
-
- for (i = 0; i < 6; i++) {
- outw(i, ioaddr + EL1_DATAPTR);
- station_addr[i] = inb(ioaddr + EL1_SAPROM);
- }
- /*
- * Check the first three octets of the S.A. for 3Com's prefix, or
- * for the Sager NP943 prefix.
- */
-
- if (station_addr[0] == 0x02 && station_addr[1] == 0x60 &&
- station_addr[2] == 0x8c)
- mname = "3c501";
- else if (station_addr[0] == 0x00 && station_addr[1] == 0x80 &&
- station_addr[2] == 0xC8)
- mname = "NP943";
- else {
- release_region(ioaddr, EL1_IO_EXTENT);
- return -ENODEV;
- }
-
- /*
- * We auto-IRQ by shutting off the interrupt line and letting it
- * float high.
- */
-
- dev->irq = irq;
-
- if (dev->irq < 2) {
- unsigned long irq_mask;
-
- irq_mask = probe_irq_on();
- inb(RX_STATUS); /* Clear pending interrupts. */
- inb(TX_STATUS);
- outb(AX_LOOP + 1, AX_CMD);
-
- outb(0x00, AX_CMD);
-
- mdelay(20);
- autoirq = probe_irq_off(irq_mask);
-
- if (autoirq == 0) {
- pr_warning("%s probe at %#x failed to detect IRQ line.\n",
- mname, ioaddr);
- release_region(ioaddr, EL1_IO_EXTENT);
- return -EAGAIN;
- }
- }
-
- outb(AX_RESET+AX_LOOP, AX_CMD); /* Loopback mode. */
- dev->base_addr = ioaddr;
- memcpy(dev->dev_addr, station_addr, ETH_ALEN);
-
- if (mem_start & 0xf)
- el_debug = mem_start & 0x7;
- if (autoirq)
- dev->irq = autoirq;
-
- pr_info("%s: %s EtherLink at %#lx, using %sIRQ %d.\n",
- dev->name, mname, dev->base_addr,
- autoirq ? "auto":"assigned ", dev->irq);
-
-#ifdef CONFIG_IP_MULTICAST
- pr_warning("WARNING: Use of the 3c501 in a multicast kernel is NOT recommended.\n");
-#endif
-
- if (el_debug)
- pr_debug("%s", version);
-
- lp = netdev_priv(dev);
- memset(lp, 0, sizeof(struct net_local));
- spin_lock_init(&lp->lock);
-
- /*
- * The EL1-specific entries in the device structure.
- */
-
- dev->netdev_ops = &el_netdev_ops;
- dev->watchdog_timeo = HZ;
- dev->ethtool_ops = &netdev_ethtool_ops;
- return 0;
-}
-
-/**
- * el1_open:
- * @dev: device that is being opened
- *
- * When an ifconfig is issued which changes the device flags to include
- * IFF_UP this function is called. It is only called when the change
- * occurs, not when the interface remains up. #el1_close will be called
- * when it goes down.
- *
- * Returns 0 for a successful open, or -EAGAIN if someone has run off
- * with our interrupt line.
- */
-
-static int el_open(struct net_device *dev)
-{
- int retval;
- int ioaddr = dev->base_addr;
- struct net_local *lp = netdev_priv(dev);
- unsigned long flags;
-
- if (el_debug > 2)
- pr_debug("%s: Doing el_open()...\n", dev->name);
-
- retval = request_irq(dev->irq, el_interrupt, 0, dev->name, dev);
- if (retval)
- return retval;
-
- spin_lock_irqsave(&lp->lock, flags);
- el_reset(dev);
- spin_unlock_irqrestore(&lp->lock, flags);
-
- lp->txing = 0; /* Board in RX mode */
- outb(AX_RX, AX_CMD); /* Aux control, irq and receive enabled */
- netif_start_queue(dev);
- return 0;
-}
-
-/**
- * el_timeout:
- * @dev: The 3c501 card that has timed out
- *
- * Attempt to restart the board. This is basically a mixture of extreme
- * violence and prayer
- *
- */
-
-static void el_timeout(struct net_device *dev)
-{
- struct net_local *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
-
- if (el_debug)
- pr_debug("%s: transmit timed out, txsr %#2x axsr=%02x rxsr=%02x.\n",
- dev->name, inb(TX_STATUS),
- inb(AX_STATUS), inb(RX_STATUS));
- dev->stats.tx_errors++;
- outb(TX_NORM, TX_CMD);
- outb(RX_NORM, RX_CMD);
- outb(AX_OFF, AX_CMD); /* Just trigger a false interrupt. */
- outb(AX_RX, AX_CMD); /* Aux control, irq and receive enabled */
- lp->txing = 0; /* Ripped back in to RX */
- netif_wake_queue(dev);
-}
-
-
-/**
- * el_start_xmit:
- * @skb: The packet that is queued to be sent
- * @dev: The 3c501 card we want to throw it down
- *
- * Attempt to send a packet to a 3c501 card. There are some interesting
- * catches here because the 3c501 is an extremely old and therefore
- * stupid piece of technology.
- *
- * If we are handling an interrupt on the other CPU we cannot load a packet
- * as we may still be attempting to retrieve the last RX packet buffer.
- *
- * When a transmit times out we dump the card into control mode and just
- * start again. It happens enough that it isn't worth logging.
- *
- * We avoid holding the spin locks when doing the packet load to the board.
- * The device is very slow, and its DMA mode is even slower. If we held the
- * lock while loading 1500 bytes onto the controller we would drop a lot of
- * serial port characters. This requires we do extra locking, but we have
- * no real choice.
- */
-
-static netdev_tx_t el_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct net_local *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
- unsigned long flags;
-
- /*
- * Avoid incoming interrupts between us flipping txing and flipping
- * mode as the driver assumes txing is a faithful indicator of card
- * state
- */
-
- spin_lock_irqsave(&lp->lock, flags);
-
- /*
- * Avoid timer-based retransmission conflicts.
- */
-
- netif_stop_queue(dev);
-
- do {
- int len = skb->len;
- int pad = 0;
- int gp_start;
- unsigned char *buf = skb->data;
-
- if (len < ETH_ZLEN)
- pad = ETH_ZLEN - len;
-
- gp_start = 0x800 - (len + pad);
-
- lp->tx_pkt_start = gp_start;
- lp->collisions = 0;
-
- dev->stats.tx_bytes += skb->len;
-
- /*
- * Command mode with status cleared should [in theory]
- * mean no more interrupts can be pending on the card.
- */
-
- outb_p(AX_SYS, AX_CMD);
- inb_p(RX_STATUS);
- inb_p(TX_STATUS);
-
- lp->loading = 1;
- lp->txing = 1;
-
- /*
- * Turn interrupts back on while we spend a pleasant
- * afternoon loading bytes into the board
- */
-
- spin_unlock_irqrestore(&lp->lock, flags);
-
- /* Set rx packet area to 0. */
- outw(0x00, RX_BUF_CLR);
- /* aim - packet will be loaded into buffer start */
- outw(gp_start, GP_LOW);
- /* load buffer (usual thing each byte increments the pointer) */
- outsb(DATAPORT, buf, len);
- if (pad) {
- while (pad--) /* Zero fill buffer tail */
- outb(0, DATAPORT);
- }
- /* the board reuses the same register */
- outw(gp_start, GP_LOW);
-
- if (lp->loading != 2) {
- /* fire ... Trigger xmit. */
- outb(AX_XMIT, AX_CMD);
- lp->loading = 0;
- if (el_debug > 2)
- pr_debug(" queued xmit.\n");
- dev_kfree_skb(skb);
- return NETDEV_TX_OK;
- }
- /* A receive upset our load, despite our best efforts */
- if (el_debug > 2)
- pr_debug("%s: burped during tx load.\n", dev->name);
- spin_lock_irqsave(&lp->lock, flags);
- } while (1);
-}
-
-/**
- * el_interrupt:
- * @irq: Interrupt number
- * @dev_id: The 3c501 that burped
- *
- * Handle the ether interface interrupts. The 3c501 needs a lot more
- * hand holding than most cards. In particular we get a transmit interrupt
- * with a collision error because the board firmware isn't capable of rewinding
- * its own transmit buffer pointers. It can however count to 16 for us.
- *
- * On the receive side the card is also very dumb. It has no buffering to
- * speak of. We simply pull the packet out of its PIO buffer (which is slow)
- * and queue it for the kernel. Then we reset the card for the next packet.
- *
- * We sometimes get surprise interrupts late both because the SMP IRQ delivery
- * is message passing and because the card sometimes seems to deliver late. I
- * think if it is part way through a receive and the mode is changed it carries
- * on receiving and sends us an interrupt. We have to band aid all these cases
- * to get a sensible 150kBytes/second performance. Even then you want a small
- * TCP window.
- */
-
-static irqreturn_t el_interrupt(int irq, void *dev_id)
-{
- struct net_device *dev = dev_id;
- struct net_local *lp;
- int ioaddr;
- int axsr; /* Aux. status reg. */
-
- ioaddr = dev->base_addr;
- lp = netdev_priv(dev);
-
- spin_lock(&lp->lock);
-
- /*
- * What happened ?
- */
-
- axsr = inb(AX_STATUS);
-
- /*
- * Log it
- */
-
- if (el_debug > 3)
- pr_debug("%s: el_interrupt() aux=%#02x\n", dev->name, axsr);
-
- if (lp->loading == 1 && !lp->txing)
- pr_warning("%s: Inconsistent state loading while not in tx\n",
- dev->name);
-
- if (lp->txing) {
- /*
- * Board in transmit mode. May be loading. If we are
- * loading we shouldn't have got this.
- */
- int txsr = inb(TX_STATUS);
-
- if (lp->loading == 1) {
- if (el_debug > 2)
- pr_debug("%s: Interrupt while loading [txsr=%02x gp=%04x rp=%04x]\n",
- dev->name, txsr, inw(GP_LOW), inw(RX_LOW));
-
- /* Force a reload */
- lp->loading = 2;
- spin_unlock(&lp->lock);
- goto out;
- }
- if (el_debug > 6)
- pr_debug("%s: txsr=%02x gp=%04x rp=%04x\n", dev->name,
- txsr, inw(GP_LOW), inw(RX_LOW));
-
- if ((axsr & 0x80) && (txsr & TX_READY) == 0) {
- /*
- * FIXME: is there a logic to whether to keep
- * on trying or reset immediately ?
- */
- if (el_debug > 1)
- pr_debug("%s: Unusual interrupt during Tx, txsr=%02x axsr=%02x gp=%03x rp=%03x.\n",
- dev->name, txsr, axsr,
- inw(ioaddr + EL1_DATAPTR),
- inw(ioaddr + EL1_RXPTR));
- lp->txing = 0;
- netif_wake_queue(dev);
- } else if (txsr & TX_16COLLISIONS) {
- /*
- * Timed out
- */
- if (el_debug)
- pr_debug("%s: Transmit failed 16 times, Ethernet jammed?\n", dev->name);
- outb(AX_SYS, AX_CMD);
- lp->txing = 0;
- dev->stats.tx_aborted_errors++;
- netif_wake_queue(dev);
- } else if (txsr & TX_COLLISION) {
- /*
- * Retrigger xmit.
- */
-
- if (el_debug > 6)
- pr_debug("%s: retransmitting after a collision.\n", dev->name);
- /*
- * Poor little chip can't reset its own start
- * pointer
- */
-
- outb(AX_SYS, AX_CMD);
- outw(lp->tx_pkt_start, GP_LOW);
- outb(AX_XMIT, AX_CMD);
- dev->stats.collisions++;
- spin_unlock(&lp->lock);
- goto out;
- } else {
- /*
- * It worked.. we will now fall through and receive
- */
- dev->stats.tx_packets++;
- if (el_debug > 6)
- pr_debug("%s: Tx succeeded %s\n", dev->name,
- (txsr & TX_RDY) ? "." : "but tx is busy!");
- /*
- * This is safe the interrupt is atomic WRT itself.
- */
- lp->txing = 0;
- /* In case more to transmit */
- netif_wake_queue(dev);
- }
- } else {
- /*
- * In receive mode.
- */
-
- int rxsr = inb(RX_STATUS);
- if (el_debug > 5)
- pr_debug("%s: rxsr=%02x txsr=%02x rp=%04x\n",
- dev->name, rxsr, inb(TX_STATUS), inw(RX_LOW));
- /*
- * Just reading rx_status fixes most errors.
- */
- if (rxsr & RX_MISSED)
- dev->stats.rx_missed_errors++;
- else if (rxsr & RX_RUNT) {
- /* Handled to avoid board lock-up. */
- dev->stats.rx_length_errors++;
- if (el_debug > 5)
- pr_debug("%s: runt.\n", dev->name);
- } else if (rxsr & RX_GOOD) {
- /*
- * Receive worked.
- */
- el_receive(dev);
- } else {
- /*
- * Nothing? Something is broken!
- */
- if (el_debug > 2)
- pr_debug("%s: No packet seen, rxsr=%02x **resetting 3c501***\n",
- dev->name, rxsr);
- el_reset(dev);
- }
- }
-
- /*
- * Move into receive mode
- */
-
- outb(AX_RX, AX_CMD);
- outw(0x00, RX_BUF_CLR);
- inb(RX_STATUS); /* Be certain that interrupts are cleared. */
- inb(TX_STATUS);
- spin_unlock(&lp->lock);
-out:
- return IRQ_HANDLED;
-}
-
-
-/**
- * el_receive:
- * @dev: Device to pull the packets from
- *
- * We have a good packet. Well, not really "good", just mostly not broken.
- * We must check everything to see if it is good. In particular we occasionally
- * get wild packet sizes from the card. If the packet seems sane we PIO it
- * off the card and queue it for the protocol layers.
- */
-
-static void el_receive(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
- int pkt_len;
- struct sk_buff *skb;
-
- pkt_len = inw(RX_LOW);
-
- if (el_debug > 4)
- pr_debug(" el_receive %d.\n", pkt_len);
-
- if (pkt_len < 60 || pkt_len > 1536) {
- if (el_debug)
- pr_debug("%s: bogus packet, length=%d\n",
- dev->name, pkt_len);
- dev->stats.rx_over_errors++;
- return;
- }
-
- /*
- * Command mode so we can empty the buffer
- */
-
- outb(AX_SYS, AX_CMD);
- skb = dev_alloc_skb(pkt_len+2);
-
- /*
- * Start of frame
- */
-
- outw(0x00, GP_LOW);
- if (skb == NULL) {
- pr_info("%s: Memory squeeze, dropping packet.\n", dev->name);
- dev->stats.rx_dropped++;
- return;
- } else {
- skb_reserve(skb, 2); /* Force 16 byte alignment */
- /*
- * The read increments through the bytes. The interrupt
- * handler will fix the pointer when it returns to
- * receive mode.
- */
- insb(DATAPORT, skb_put(skb, pkt_len), pkt_len);
- skb->protocol = eth_type_trans(skb, dev);
- netif_rx(skb);
- dev->stats.rx_packets++;
- dev->stats.rx_bytes += pkt_len;
- }
-}
-
-/**
- * el_reset: Reset a 3c501 card
- * @dev: The 3c501 card about to get zapped
- *
- * Even resetting a 3c501 isn't simple. When you activate reset it loses all
- * its configuration. You must hold the lock when doing this. The function
- * cannot take the lock itself as it is callable from the irq handler.
- */
-
-static void el_reset(struct net_device *dev)
-{
- struct net_local *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
-
- if (el_debug > 2)
- pr_info("3c501 reset...\n");
- outb(AX_RESET, AX_CMD); /* Reset the chip */
- /* Aux control, irq and loopback enabled */
- outb(AX_LOOP, AX_CMD);
- {
- int i;
- for (i = 0; i < 6; i++) /* Set the station address. */
- outb(dev->dev_addr[i], ioaddr + i);
- }
-
- outw(0, RX_BUF_CLR); /* Set rx packet area to 0. */
- outb(TX_NORM, TX_CMD); /* tx irq on done, collision */
- outb(RX_NORM, RX_CMD); /* Set Rx commands. */
- inb(RX_STATUS); /* Clear status. */
- inb(TX_STATUS);
- lp->txing = 0;
-}
-
-/**
- * el1_close:
- * @dev: 3c501 card to shut down
- *
- * Close a 3c501 card. The IFF_UP flag has been cleared by the user via
- * the SIOCSIFFLAGS ioctl. We stop any further transmissions being queued,
- * and then disable the interrupts. Finally we reset the chip. The effects
- * of the rest will be cleaned up by #el1_open. Always returns 0 indicating
- * a success.
- */
-
-static int el1_close(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
-
- if (el_debug > 2)
- pr_info("%s: Shutting down Ethernet card at %#x.\n",
- dev->name, ioaddr);
-
- netif_stop_queue(dev);
-
- /*
- * Free and disable the IRQ.
- */
-
- free_irq(dev->irq, dev);
- outb(AX_RESET, AX_CMD); /* Reset the chip */
-
- return 0;
-}
-
-/**
- * set_multicast_list:
- * @dev: The device to adjust
- *
- * Set or clear the multicast filter for this adaptor to use the best-effort
- * filtering supported. The 3c501 supports only three modes of filtering.
- * It always receives broadcasts and packets for itself. You can choose to
- * optionally receive all packets, or all multicast packets on top of this.
- */
-
-static void set_multicast_list(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
-
- if (dev->flags & IFF_PROMISC) {
- outb(RX_PROM, RX_CMD);
- inb(RX_STATUS);
- } else if (!netdev_mc_empty(dev) || dev->flags & IFF_ALLMULTI) {
- /* Multicast or all multicast is the same */
- outb(RX_MULT, RX_CMD);
- inb(RX_STATUS); /* Clear status. */
- } else {
- outb(RX_NORM, RX_CMD);
- inb(RX_STATUS);
- }
-}
-
-
-static void netdev_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- strcpy(info->driver, DRV_NAME);
- strcpy(info->version, DRV_VERSION);
- sprintf(info->bus_info, "ISA 0x%lx", dev->base_addr);
-}
-
-static u32 netdev_get_msglevel(struct net_device *dev)
-{
- return debug;
-}
-
-static void netdev_set_msglevel(struct net_device *dev, u32 level)
-{
- debug = level;
-}
-
-static const struct ethtool_ops netdev_ethtool_ops = {
- .get_drvinfo = netdev_get_drvinfo,
- .get_msglevel = netdev_get_msglevel,
- .set_msglevel = netdev_set_msglevel,
-};
-
-#ifdef MODULE
-
-static struct net_device *dev_3c501;
-
-module_param(io, int, 0);
-module_param(irq, int, 0);
-MODULE_PARM_DESC(io, "EtherLink I/O base address");
-MODULE_PARM_DESC(irq, "EtherLink IRQ number");
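For illustration only, a hypothetical load of the module with these parameters would look like "modprobe 3c501 io=0x280 irq=5"; the values shown are simply the driver defaults declared near the top of the file.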
-
-/**
- * init_module:
- *
- * When the driver is loaded as a module this function is called. We fake up
- * a device structure with the base I/O and interrupt set as if it were being
- * called from Space.c. This minimises the extra code that would otherwise
- * be required.
- *
- * Returns 0 for success or -EIO if a card is not found. Returning an error
- * here also causes the module to be unloaded
- */
-
-int __init init_module(void)
-{
- dev_3c501 = el1_probe(-1);
- if (IS_ERR(dev_3c501))
- return PTR_ERR(dev_3c501);
- return 0;
-}
-
-/**
- * cleanup_module:
- *
- * The module is being unloaded. We unhook our network device from the system
- * and then free up the resources we took when the card was found.
- */
-
-void __exit cleanup_module(void)
-{
- struct net_device *dev = dev_3c501;
- unregister_netdev(dev);
- release_region(dev->base_addr, EL1_IO_EXTENT);
- free_netdev(dev);
-}
-
-#endif /* MODULE */
-
-MODULE_AUTHOR("Donald Becker, Alan Cox");
-MODULE_DESCRIPTION("Support for the ancient 3Com 3c501 ethernet card");
-MODULE_LICENSE("GPL");
-
+++ /dev/null
-
-/*
- * Index to functions.
- */
-
-static int el1_probe1(struct net_device *dev, int ioaddr);
-static int el_open(struct net_device *dev);
-static void el_timeout(struct net_device *dev);
-static netdev_tx_t el_start_xmit(struct sk_buff *skb, struct net_device *dev);
-static irqreturn_t el_interrupt(int irq, void *dev_id);
-static void el_receive(struct net_device *dev);
-static void el_reset(struct net_device *dev);
-static int el1_close(struct net_device *dev);
-static void set_multicast_list(struct net_device *dev);
-static const struct ethtool_ops netdev_ethtool_ops;
-
-#define EL1_IO_EXTENT 16
-
-#ifndef EL_DEBUG
-#define EL_DEBUG 0 /* use 0 for production, 1 for devel., >2 for debug */
-#endif /* Anything above 5 is wordy death! */
-#define debug el_debug
-static int el_debug = EL_DEBUG;
-
-/*
- * Board-specific info in netdev_priv(dev).
- */
-
-struct net_local
-{
- int tx_pkt_start; /* The length of the current Tx packet. */
- int collisions; /* Tx collisions this packet */
- int loading; /* Spot buffer load collisions */
- int txing; /* True if card is in TX mode */
- spinlock_t lock; /* Serializing lock */
-};
-
-
-#define RX_STATUS (ioaddr + 0x06)
-#define RX_CMD RX_STATUS
-#define TX_STATUS (ioaddr + 0x07)
-#define TX_CMD TX_STATUS
-#define GP_LOW (ioaddr + 0x08)
-#define GP_HIGH (ioaddr + 0x09)
-#define RX_BUF_CLR (ioaddr + 0x0A)
-#define RX_LOW (ioaddr + 0x0A)
-#define RX_HIGH (ioaddr + 0x0B)
-#define SAPROM (ioaddr + 0x0C)
-#define AX_STATUS (ioaddr + 0x0E)
-#define AX_CMD AX_STATUS
-#define DATAPORT (ioaddr + 0x0F)
-#define TX_RDY 0x08 /* In TX_STATUS */
-
-#define EL1_DATAPTR 0x08
-#define EL1_RXPTR 0x0A
-#define EL1_SAPROM 0x0C
-#define EL1_DATAPORT 0x0f
-
-/*
- * Writes to the ax command register.
- */
-
-#define AX_OFF 0x00 /* Irq off, buffer access on */
-#define AX_SYS 0x40 /* Load the buffer */
-#define AX_XMIT 0x44 /* Transmit a packet */
-#define AX_RX 0x48 /* Receive a packet */
-#define AX_LOOP 0x0C /* Loopback mode */
-#define AX_RESET 0x80
-
-/*
- * Normal receive mode written to RX_STATUS. We must intr on short packets
- * to avoid bogus rx lockups.
- */
-
-#define RX_NORM 0xA8 /* 0x68 == all addrs, 0xA8 only to me. */
-#define RX_PROM 0x68 /* Senior Prom, uhmm promiscuous mode. */
-#define RX_MULT 0xE8 /* Accept multicast packets. */
-#define TX_NORM 0x0A /* Interrupt on everything that might hang the chip */
-
-/*
- * TX_STATUS register.
- */
-
-#define TX_COLLISION 0x02
-#define TX_16COLLISIONS 0x04
-#define TX_READY 0x08
-
-#define RX_RUNT 0x08
-#define RX_MISSED 0x01 /* Missed a packet due to 3c501 braindamage. */
-#define RX_GOOD 0x30 /* Good packet 0x20, or simple overflow 0x10. */
-
+++ /dev/null
-/* 3c509.c: A 3c509 EtherLink3 ethernet driver for linux. */
-/*
- Written 1993-2000 by Donald Becker.
-
- Copyright 1994-2000 by Donald Becker.
- Copyright 1993 United States Government as represented by the
- Director, National Security Agency. This software may be used and
- distributed according to the terms of the GNU General Public License,
- incorporated herein by reference.
-
- This driver is for the 3Com EtherLinkIII series.
-
- The author may be reached as becker@scyld.com, or C/O
- Scyld Computing Corporation
- 410 Severn Ave., Suite 210
- Annapolis MD 21403
-
- Known limitations:
- Because of the way 3c509 ISA detection works it's difficult to predict
- a priori which of several ISA-mode cards will be detected first.
-
- This driver does not use predictive interrupt mode, resulting in higher
- packet latency but lower overhead. If interrupts are disabled for an
- unusually long time it could also result in missed packets, but in
- practice this rarely happens.
-
-
- FIXES:
- Alan Cox: Removed the 'Unexpected interrupt' bug.
- Michael Meskes: Upgraded to Donald Becker's version 1.07.
- Alan Cox: Increased the eeprom delay. Regardless of
- what the docs say some people definitely
- get problems with lower (but in card spec)
- delays
- v1.10 4/21/97 Fixed module code so that multiple cards may be detected,
- other cleanups. -djb
- Andrea Arcangeli: Upgraded to Donald Becker's version 1.12.
- Rick Payne: Fixed SMP race condition
- v1.13 9/8/97 Made 'max_interrupt_work' an insmod-settable variable -djb
- v1.14 10/15/97 Avoided waiting..discard message for fast machines -djb
- v1.15 1/31/98 Faster recovery for Tx errors. -djb
- v1.16 2/3/98 Different ID port handling to avoid sound cards. -djb
- v1.18 12Mar2001 Andrew Morton
- - Avoid bogus detect of 3c590's (Andrzej Krzysztofowicz)
- - Reviewed against 1.18 from scyld.com
- v1.18a 17Nov2001 Jeff Garzik <jgarzik@pobox.com>
- - ethtool support
- v1.18b 1Mar2002 Zwane Mwaikambo <zwane@commfireservices.com>
- - Power Management support
- v1.18c 1Mar2002 David Ruggiero <jdr@farfalle.com>
- - Full duplex support
- v1.19 16Oct2002 Zwane Mwaikambo <zwane@linuxpower.ca>
- - Additional ethtool features
- v1.19a 28Oct2002 David Ruggiero <jdr@farfalle.com>
- - Increase *read_eeprom udelay to workaround oops with 2 cards.
- v1.19b 08Nov2002 Marc Zyngier <maz@wild-wind.fr.eu.org>
- - Introduce driver model for EISA cards.
- v1.20 04Feb2008 Ondrej Zary <linux@rainbow-software.org>
- - convert to isa_driver and pnp_driver and some cleanups
-*/
-
-#define DRV_NAME "3c509"
-#define DRV_VERSION "1.20"
-#define DRV_RELDATE "04Feb2008"
-
-/* A few values that may be tweaked. */
-
-/* Time in jiffies before concluding the transmitter is hung. */
-#define TX_TIMEOUT (400*HZ/1000)
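(400*HZ/1000 is simply 400 ms expressed in jiffies: 40 at HZ=100, 100 at HZ=250, 400 at HZ=1000.)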
-
-#include <linux/module.h>
-#include <linux/mca.h>
-#include <linux/isa.h>
-#include <linux/pnp.h>
-#include <linux/string.h>
-#include <linux/interrupt.h>
-#include <linux/errno.h>
-#include <linux/in.h>
-#include <linux/ioport.h>
-#include <linux/init.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/pm.h>
-#include <linux/skbuff.h>
-#include <linux/delay.h> /* for udelay() */
-#include <linux/spinlock.h>
-#include <linux/ethtool.h>
-#include <linux/device.h>
-#include <linux/eisa.h>
-#include <linux/bitops.h>
-
-#include <asm/uaccess.h>
-#include <asm/io.h>
-#include <asm/irq.h>
-
-static char version[] __devinitdata = DRV_NAME ".c:" DRV_VERSION " " DRV_RELDATE " becker@scyld.com\n";
-
-#ifdef EL3_DEBUG
-static int el3_debug = EL3_DEBUG;
-#else
-static int el3_debug = 2;
-#endif
-
-/* Used to do a global count of all the cards in the system. Must be
- * a global variable so that the mca/eisa probe routines can increment
- * it */
-static int el3_cards = 0;
-#define EL3_MAX_CARDS 8
-
-/* To minimize the size of the driver source I only define operating
- constants if they are used several times. You'll need the manual
- anyway if you want to understand driver details. */
-/* Offsets from base I/O address. */
-#define EL3_DATA 0x00
-#define EL3_CMD 0x0e
-#define EL3_STATUS 0x0e
-#define EEPROM_READ 0x80
-
-#define EL3_IO_EXTENT 16
-
-#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
-
-
-/* The top five bits written to EL3_CMD are a command, the lower
- 11 bits are the parameter, if applicable. */
-enum c509cmd {
- TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
- RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11, RxDiscard = 8<<11,
- TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
- FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
- SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
- SetTxThreshold = 18<<11, SetTxStart = 19<<11, StatsEnable = 21<<11,
- StatsDisable = 22<<11, StopCoax = 23<<11, PowerUp = 27<<11,
- PowerDown = 28<<11, PowerAuto = 29<<11};
-
-enum c509status {
- IntLatch = 0x0001, AdapterFailure = 0x0002, TxComplete = 0x0004,
- TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
- IntReq = 0x0040, StatsFull = 0x0080, CmdBusy = 0x1000, };
-
-/* The SetRxFilter command accepts the following classes: */
-enum RxFilter {
- RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8 };
-
-/* Register window 1 offsets, the window used in normal operation. */
-#define TX_FIFO 0x00
-#define RX_FIFO 0x00
-#define RX_STATUS 0x08
-#define TX_STATUS 0x0B
-#define TX_FREE 0x0C /* Remaining free bytes in Tx buffer. */
-
-#define WN0_CONF_CTRL 0x04 /* Window 0: Configuration control register */
-#define WN0_ADDR_CONF 0x06 /* Window 0: Address configuration register */
-#define WN0_IRQ 0x08 /* Window 0: Set IRQ line in bits 12-15. */
-#define WN4_MEDIA 0x0A /* Window 4: Various transcvr/media bits. */
-#define MEDIA_TP 0x00C0 /* Enable link beat and jabber for 10baseT. */
-#define WN4_NETDIAG 0x06 /* Window 4: Net diagnostic */
-#define FD_ENABLE 0x8000 /* Enable full-duplex ("external loopback") */
-
-/*
- * Must be a power of two (we use a binary and in the
- * circular queue)
- */
-#define SKB_QUEUE_SIZE 64
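A power-of-two size lets the ring index wrap with a mask rather than a modulo. A hypothetical enqueue step would look like the lines below (queue and head are fields of struct el3_private just after this; the driver's actual queueing code is not part of this excerpt):

	lp->queue[lp->head] = skb;
	lp->head = (lp->head + 1) & (SKB_QUEUE_SIZE - 1);	/* 64 entries, mask 0x3f */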
-
-enum el3_cardtype { EL3_ISA, EL3_PNP, EL3_MCA, EL3_EISA };
-
-struct el3_private {
- spinlock_t lock;
- /* skb send-queue */
- int head, size;
- struct sk_buff *queue[SKB_QUEUE_SIZE];
- enum el3_cardtype type;
-};
-static int id_port;
-static int current_tag;
-static struct net_device *el3_devs[EL3_MAX_CARDS];
-
-/* Parameters that may be passed into the module. */
-static int debug = -1;
-static int irq[] = {-1, -1, -1, -1, -1, -1, -1, -1};
-/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
-static int max_interrupt_work = 10;
-#ifdef CONFIG_PNP
-static int nopnp;
-#endif
-
-static int __devinit el3_common_init(struct net_device *dev);
-static void el3_common_remove(struct net_device *dev);
-static ushort id_read_eeprom(int index);
-static ushort read_eeprom(int ioaddr, int index);
-static int el3_open(struct net_device *dev);
-static netdev_tx_t el3_start_xmit(struct sk_buff *skb, struct net_device *dev);
-static irqreturn_t el3_interrupt(int irq, void *dev_id);
-static void update_stats(struct net_device *dev);
-static struct net_device_stats *el3_get_stats(struct net_device *dev);
-static int el3_rx(struct net_device *dev);
-static int el3_close(struct net_device *dev);
-static void set_multicast_list(struct net_device *dev);
-static void el3_tx_timeout (struct net_device *dev);
-static void el3_down(struct net_device *dev);
-static void el3_up(struct net_device *dev);
-static const struct ethtool_ops ethtool_ops;
-#ifdef CONFIG_PM
-static int el3_suspend(struct device *, pm_message_t);
-static int el3_resume(struct device *);
-#else
-#define el3_suspend NULL
-#define el3_resume NULL
-#endif
-
-
-/* generic device remove for all device types */
-static int el3_device_remove (struct device *device);
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void el3_poll_controller(struct net_device *dev);
-#endif
-
-/* Return 0 on success, 1 on error, 2 when found already detected PnP card */
-static int el3_isa_id_sequence(__be16 *phys_addr)
-{
- short lrs_state = 0xff;
- int i;
-
- /* ISA boards are detected by sending the ID sequence to the
- ID_PORT. We find cards past the first by setting the 'current_tag'
- on cards as they are found. Cards with their tag set will not
- respond to subsequent ID sequences. */
-
- outb(0x00, id_port);
- outb(0x00, id_port);
- for (i = 0; i < 255; i++) {
- outb(lrs_state, id_port);
- lrs_state <<= 1;
- lrs_state = lrs_state & 0x100 ? lrs_state ^ 0xcf : lrs_state;
- }
- /* For the first probe, clear all board's tag registers. */
- if (current_tag == 0)
- outb(0xd0, id_port);
- else /* Otherwise kill off already-found boards. */
- outb(0xd8, id_port);
- if (id_read_eeprom(7) != 0x6d50)
- return 1;
- /* Read in EEPROM data, which does contention-select.
- Only the lowest address board will stay "on-line".
- 3Com got the byte order backwards. */
- for (i = 0; i < 3; i++)
- phys_addr[i] = htons(id_read_eeprom(i));
-#ifdef CONFIG_PNP
- if (!nopnp) {
- /* The ISA PnP 3c509 cards respond to the ID sequence too.
- This check is needed in order not to register them twice. */
- for (i = 0; i < el3_cards; i++) {
- struct el3_private *lp = netdev_priv(el3_devs[i]);
- if (lp->type == EL3_PNP &&
- !memcmp(phys_addr, el3_devs[i]->dev_addr,
- ETH_ALEN)) {
- if (el3_debug > 3)
- pr_debug("3c509 with address %02x %02x %02x %02x %02x %02x was found by ISAPnP\n",
- phys_addr[0] & 0xff, phys_addr[0] >> 8,
- phys_addr[1] & 0xff, phys_addr[1] >> 8,
- phys_addr[2] & 0xff, phys_addr[2] >> 8);
- /* Set the adaptor tag so that the next card can be found. */
- outb(0xd0 + ++current_tag, id_port);
- return 2;
- }
- }
- }
-#endif /* CONFIG_PNP */
- return 0;
-
-}
-
-static void __devinit el3_dev_fill(struct net_device *dev, __be16 *phys_addr,
- int ioaddr, int irq, int if_port,
- enum el3_cardtype type)
-{
- struct el3_private *lp = netdev_priv(dev);
-
- memcpy(dev->dev_addr, phys_addr, ETH_ALEN);
- dev->base_addr = ioaddr;
- dev->irq = irq;
- dev->if_port = if_port;
- lp->type = type;
-}
-
-static int __devinit el3_isa_match(struct device *pdev,
- unsigned int ndev)
-{
- struct net_device *dev;
- int ioaddr, isa_irq, if_port, err;
- unsigned int iobase;
- __be16 phys_addr[3];
-
- while ((err = el3_isa_id_sequence(phys_addr)) == 2)
- ; /* Skip to next card when PnP card found */
- if (err == 1)
- return 0;
-
- iobase = id_read_eeprom(8);
- if_port = iobase >> 14;
- ioaddr = 0x200 + ((iobase & 0x1f) << 4);
- if (irq[el3_cards] > 1 && irq[el3_cards] < 16)
- isa_irq = irq[el3_cards];
- else
- isa_irq = id_read_eeprom(9) >> 12;
-
- dev = alloc_etherdev(sizeof(struct el3_private));
- if (!dev)
- return -ENOMEM;
-
- netdev_boot_setup_check(dev);
-
- if (!request_region(ioaddr, EL3_IO_EXTENT, "3c509-isa")) {
- free_netdev(dev);
- return 0;
- }
-
- /* Set the adaptor tag so that the next card can be found. */
- outb(0xd0 + ++current_tag, id_port);
-
- /* Activate the adaptor at the EEPROM location. */
- outb((ioaddr >> 4) | 0xe0, id_port);
-
- EL3WINDOW(0);
- if (inw(ioaddr) != 0x6d50) {
- free_netdev(dev);
- return 0;
- }
-
- /* Free the interrupt so that some other card can use it. */
- outw(0x0f00, ioaddr + WN0_IRQ);
-
- el3_dev_fill(dev, phys_addr, ioaddr, isa_irq, if_port, EL3_ISA);
- dev_set_drvdata(pdev, dev);
- if (el3_common_init(dev)) {
- free_netdev(dev);
- return 0;
- }
-
- el3_devs[el3_cards++] = dev;
- return 1;
-}
-
-static int __devexit el3_isa_remove(struct device *pdev,
- unsigned int ndev)
-{
- el3_device_remove(pdev);
- dev_set_drvdata(pdev, NULL);
- return 0;
-}
-
-#ifdef CONFIG_PM
-static int el3_isa_suspend(struct device *dev, unsigned int n,
- pm_message_t state)
-{
- current_tag = 0;
- return el3_suspend(dev, state);
-}
-
-static int el3_isa_resume(struct device *dev, unsigned int n)
-{
- struct net_device *ndev = dev_get_drvdata(dev);
- int ioaddr = ndev->base_addr, err;
- __be16 phys_addr[3];
-
- while ((err = el3_isa_id_sequence(phys_addr)) == 2)
- ; /* Skip to next card when PnP card found */
- if (err == 1)
- return 0;
- /* Set the adaptor tag so that the next card can be found. */
- outb(0xd0 + ++current_tag, id_port);
- /* Enable the card */
- outb((ioaddr >> 4) | 0xe0, id_port);
- EL3WINDOW(0);
- if (inw(ioaddr) != 0x6d50)
- return 1;
- /* Free the interrupt so that some other card can use it. */
- outw(0x0f00, ioaddr + WN0_IRQ);
- return el3_resume(dev);
-}
-#endif
-
-static struct isa_driver el3_isa_driver = {
- .match = el3_isa_match,
- .remove = __devexit_p(el3_isa_remove),
-#ifdef CONFIG_PM
- .suspend = el3_isa_suspend,
- .resume = el3_isa_resume,
-#endif
- .driver = {
- .name = "3c509"
- },
-};
-static int isa_registered;
-
-#ifdef CONFIG_PNP
-static struct pnp_device_id el3_pnp_ids[] = {
- { .id = "TCM5090" }, /* 3Com Etherlink III (TP) */
- { .id = "TCM5091" }, /* 3Com Etherlink III */
- { .id = "TCM5094" }, /* 3Com Etherlink III (combo) */
- { .id = "TCM5095" }, /* 3Com Etherlink III (TPO) */
- { .id = "TCM5098" }, /* 3Com Etherlink III (TPC) */
- { .id = "PNP80f7" }, /* 3Com Etherlink III compatible */
- { .id = "PNP80f8" }, /* 3Com Etherlink III compatible */
- { .id = "" }
-};
-MODULE_DEVICE_TABLE(pnp, el3_pnp_ids);
-
-static int __devinit el3_pnp_probe(struct pnp_dev *pdev,
- const struct pnp_device_id *id)
-{
- short i;
- int ioaddr, irq, if_port;
- __be16 phys_addr[3];
- struct net_device *dev = NULL;
- int err;
-
- ioaddr = pnp_port_start(pdev, 0);
- if (!request_region(ioaddr, EL3_IO_EXTENT, "3c509-pnp"))
- return -EBUSY;
- irq = pnp_irq(pdev, 0);
- EL3WINDOW(0);
- for (i = 0; i < 3; i++)
- phys_addr[i] = htons(read_eeprom(ioaddr, i));
- if_port = read_eeprom(ioaddr, 8) >> 14;
- dev = alloc_etherdev(sizeof(struct el3_private));
- if (!dev) {
- release_region(ioaddr, EL3_IO_EXTENT);
- return -ENOMEM;
- }
- SET_NETDEV_DEV(dev, &pdev->dev);
- netdev_boot_setup_check(dev);
-
- el3_dev_fill(dev, phys_addr, ioaddr, irq, if_port, EL3_PNP);
- pnp_set_drvdata(pdev, dev);
- err = el3_common_init(dev);
-
- if (err) {
- pnp_set_drvdata(pdev, NULL);
- free_netdev(dev);
- return err;
- }
-
- el3_devs[el3_cards++] = dev;
- return 0;
-}
-
-static void __devexit el3_pnp_remove(struct pnp_dev *pdev)
-{
- el3_common_remove(pnp_get_drvdata(pdev));
- pnp_set_drvdata(pdev, NULL);
-}
-
-#ifdef CONFIG_PM
-static int el3_pnp_suspend(struct pnp_dev *pdev, pm_message_t state)
-{
- return el3_suspend(&pdev->dev, state);
-}
-
-static int el3_pnp_resume(struct pnp_dev *pdev)
-{
- return el3_resume(&pdev->dev);
-}
-#endif
-
-static struct pnp_driver el3_pnp_driver = {
- .name = "3c509",
- .id_table = el3_pnp_ids,
- .probe = el3_pnp_probe,
- .remove = __devexit_p(el3_pnp_remove),
-#ifdef CONFIG_PM
- .suspend = el3_pnp_suspend,
- .resume = el3_pnp_resume,
-#endif
-};
-static int pnp_registered;
-#endif /* CONFIG_PNP */
-
-#ifdef CONFIG_EISA
-static struct eisa_device_id el3_eisa_ids[] = {
- { "TCM5090" },
- { "TCM5091" },
- { "TCM5092" },
- { "TCM5093" },
- { "TCM5094" },
- { "TCM5095" },
- { "TCM5098" },
- { "" }
-};
-MODULE_DEVICE_TABLE(eisa, el3_eisa_ids);
-
-static int el3_eisa_probe (struct device *device);
-
-static struct eisa_driver el3_eisa_driver = {
- .id_table = el3_eisa_ids,
- .driver = {
- .name = "3c579",
- .probe = el3_eisa_probe,
- .remove = __devexit_p (el3_device_remove),
- .suspend = el3_suspend,
- .resume = el3_resume,
- }
-};
-static int eisa_registered;
-#endif
-
-#ifdef CONFIG_MCA
-static int el3_mca_probe(struct device *dev);
-
-static short el3_mca_adapter_ids[] __initdata = {
- 0x627c,
- 0x627d,
- 0x62db,
- 0x62f6,
- 0x62f7,
- 0x0000
-};
-
-static char *el3_mca_adapter_names[] __initdata = {
- "3Com 3c529 EtherLink III (10base2)",
- "3Com 3c529 EtherLink III (10baseT)",
- "3Com 3c529 EtherLink III (test mode)",
- "3Com 3c529 EtherLink III (TP or coax)",
- "3Com 3c529 EtherLink III (TP)",
- NULL
-};
-
-static struct mca_driver el3_mca_driver = {
- .id_table = el3_mca_adapter_ids,
- .driver = {
- .name = "3c529",
- .bus = &mca_bus_type,
- .probe = el3_mca_probe,
- .remove = __devexit_p(el3_device_remove),
- .suspend = el3_suspend,
- .resume = el3_resume,
- },
-};
-static int mca_registered;
-#endif /* CONFIG_MCA */
-
-static const struct net_device_ops netdev_ops = {
- .ndo_open = el3_open,
- .ndo_stop = el3_close,
- .ndo_start_xmit = el3_start_xmit,
- .ndo_get_stats = el3_get_stats,
- .ndo_set_multicast_list = set_multicast_list,
- .ndo_tx_timeout = el3_tx_timeout,
- .ndo_change_mtu = eth_change_mtu,
- .ndo_set_mac_address = eth_mac_addr,
- .ndo_validate_addr = eth_validate_addr,
-#ifdef CONFIG_NET_POLL_CONTROLLER
- .ndo_poll_controller = el3_poll_controller,
-#endif
-};
-
-static int __devinit el3_common_init(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- int err;
- const char *if_names[] = {"10baseT", "AUI", "undefined", "BNC"};
-
- spin_lock_init(&lp->lock);
-
- if (dev->mem_start & 0x05) { /* xcvr codes 1/3/4/12 */
- dev->if_port = (dev->mem_start & 0x0f);
- } else { /* xcvr codes 0/8 */
- /* use eeprom value, but save user's full-duplex selection */
- dev->if_port |= (dev->mem_start & 0x08);
- }
-
- /* The EL3-specific entries in the device structure. */
- dev->netdev_ops = &netdev_ops;
- dev->watchdog_timeo = TX_TIMEOUT;
- SET_ETHTOOL_OPS(dev, &ethtool_ops);
-
- err = register_netdev(dev);
- if (err) {
- pr_err("Failed to register 3c5x9 at %#3.3lx, IRQ %d.\n",
- dev->base_addr, dev->irq);
- release_region(dev->base_addr, EL3_IO_EXTENT);
- return err;
- }
-
- pr_info("%s: 3c5x9 found at %#3.3lx, %s port, address %pM, IRQ %d.\n",
- dev->name, dev->base_addr, if_names[(dev->if_port & 0x03)],
- dev->dev_addr, dev->irq);
-
- if (el3_debug > 0)
- pr_info("%s", version);
- return 0;
-
-}
-
-static void el3_common_remove (struct net_device *dev)
-{
- unregister_netdev (dev);
- release_region(dev->base_addr, EL3_IO_EXTENT);
- free_netdev (dev);
-}
-
-#ifdef CONFIG_MCA
-static int __init el3_mca_probe(struct device *device)
-{
- /* Based on Erik Nygren's (nygren@mit.edu) 3c529 patch,
- * heavily modified by Chris Beauregard
- * (cpbeaure@csclub.uwaterloo.ca) to support standard MCA
- * probing.
- *
- * redone for multi-card detection by ZP Gu (zpg@castle.net)
- * now works as a module */
-
- short i;
- int ioaddr, irq, if_port;
- __be16 phys_addr[3];
- struct net_device *dev = NULL;
- u_char pos4, pos5;
- struct mca_device *mdev = to_mca_device(device);
- int slot = mdev->slot;
- int err;
-
- pos4 = mca_device_read_stored_pos(mdev, 4);
- pos5 = mca_device_read_stored_pos(mdev, 5);
-
- ioaddr = ((short)((pos4&0xfc)|0x02)) << 8;
- irq = pos5 & 0x0f;
-
-
- pr_info("3c529: found %s at slot %d\n",
- el3_mca_adapter_names[mdev->index], slot + 1);
-
- /* claim the slot */
- strncpy(mdev->name, el3_mca_adapter_names[mdev->index],
- sizeof(mdev->name));
- mca_device_set_claim(mdev, 1);
-
- if_port = pos4 & 0x03;
-
- irq = mca_device_transform_irq(mdev, irq);
- ioaddr = mca_device_transform_ioport(mdev, ioaddr);
- if (el3_debug > 2) {
- pr_debug("3c529: irq %d ioaddr 0x%x ifport %d\n", irq, ioaddr, if_port);
- }
- EL3WINDOW(0);
- for (i = 0; i < 3; i++)
- phys_addr[i] = htons(read_eeprom(ioaddr, i));
-
- dev = alloc_etherdev(sizeof (struct el3_private));
- if (dev == NULL) {
- release_region(ioaddr, EL3_IO_EXTENT);
- return -ENOMEM;
- }
-
- netdev_boot_setup_check(dev);
-
- el3_dev_fill(dev, phys_addr, ioaddr, irq, if_port, EL3_MCA);
- dev_set_drvdata(device, dev);
- err = el3_common_init(dev);
-
- if (err) {
- dev_set_drvdata(device, NULL);
- free_netdev(dev);
- return -ENOMEM;
- }
-
- el3_devs[el3_cards++] = dev;
- return 0;
-}
-
-#endif /* CONFIG_MCA */
-
-#ifdef CONFIG_EISA
-static int __init el3_eisa_probe (struct device *device)
-{
- short i;
- int ioaddr, irq, if_port;
- __be16 phys_addr[3];
- struct net_device *dev = NULL;
- struct eisa_device *edev;
- int err;
-
- /* Yeepee, The driver framework is calling us ! */
- edev = to_eisa_device (device);
- ioaddr = edev->base_addr;
-
- if (!request_region(ioaddr, EL3_IO_EXTENT, "3c579-eisa"))
- return -EBUSY;
-
- /* Change the register set to the configuration window 0. */
- outw(SelectWindow | 0, ioaddr + 0xC80 + EL3_CMD);
-
- irq = inw(ioaddr + WN0_IRQ) >> 12;
- if_port = inw(ioaddr + 6)>>14;
- for (i = 0; i < 3; i++)
- phys_addr[i] = htons(read_eeprom(ioaddr, i));
-
- /* Restore the "Product ID" to the EEPROM read register. */
- read_eeprom(ioaddr, 3);
-
- dev = alloc_etherdev(sizeof (struct el3_private));
- if (dev == NULL) {
- release_region(ioaddr, EL3_IO_EXTENT);
- return -ENOMEM;
- }
-
- netdev_boot_setup_check(dev);
-
- el3_dev_fill(dev, phys_addr, ioaddr, irq, if_port, EL3_EISA);
- eisa_set_drvdata (edev, dev);
- err = el3_common_init(dev);
-
- if (err) {
- eisa_set_drvdata (edev, NULL);
- free_netdev(dev);
- return err;
- }
-
- el3_devs[el3_cards++] = dev;
- return 0;
-}
-#endif
-
-/* This remove works for all device types.
- *
- * The net dev must be stored in the driver data field */
-static int __devexit el3_device_remove (struct device *device)
-{
- struct net_device *dev;
-
- dev = dev_get_drvdata(device);
-
- el3_common_remove (dev);
- return 0;
-}
-
-/* Read a word from the EEPROM using the regular EEPROM access register.
- Assume that we are in register window zero.
- */
-static ushort read_eeprom(int ioaddr, int index)
-{
- outw(EEPROM_READ + index, ioaddr + 10);
- /* Pause for at least 162 us. for the read to take place.
- Some chips seem to require much longer */
- mdelay(2);
- return inw(ioaddr + 12);
-}
-
-/* Read a word from the EEPROM when in the ISA ID probe state. */
-static ushort id_read_eeprom(int index)
-{
- int bit, word = 0;
-
- /* Issue read command, and pause for at least 162 us. for it to complete.
- Assume extra-fast 16Mhz bus. */
- outb(EEPROM_READ + index, id_port);
-
- /* Pause for at least 162 us. for the read to take place. */
- /* Some chips seem to require much longer */
- mdelay(4);
-
- for (bit = 15; bit >= 0; bit--)
- word = (word << 1) + (inb(id_port) & 0x01);
-
- if (el3_debug > 3)
- pr_debug(" 3c509 EEPROM word %d %#4.4x.\n", index, word);
-
- return word;
-}
-
-
-static int
-el3_open(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
- int i;
-
- outw(TxReset, ioaddr + EL3_CMD);
- outw(RxReset, ioaddr + EL3_CMD);
- outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
-
- i = request_irq(dev->irq, el3_interrupt, 0, dev->name, dev);
- if (i)
- return i;
-
- EL3WINDOW(0);
- if (el3_debug > 3)
- pr_debug("%s: Opening, IRQ %d status@%x %4.4x.\n", dev->name,
- dev->irq, ioaddr + EL3_STATUS, inw(ioaddr + EL3_STATUS));
-
- el3_up(dev);
-
- if (el3_debug > 3)
- pr_debug("%s: Opened 3c509 IRQ %d status %4.4x.\n",
- dev->name, dev->irq, inw(ioaddr + EL3_STATUS));
-
- return 0;
-}
-
-static void
-el3_tx_timeout (struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
-
- /* Transmitter timeout, serious problems. */
- pr_warning("%s: transmit timed out, Tx_status %2.2x status %4.4x Tx FIFO room %d.\n",
- dev->name, inb(ioaddr + TX_STATUS), inw(ioaddr + EL3_STATUS),
- inw(ioaddr + TX_FREE));
- dev->stats.tx_errors++;
- dev->trans_start = jiffies; /* prevent tx timeout */
- /* Issue TX_RESET and TX_START commands. */
- outw(TxReset, ioaddr + EL3_CMD);
- outw(TxEnable, ioaddr + EL3_CMD);
- netif_wake_queue(dev);
-}
-
-
-static netdev_tx_t
-el3_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
- unsigned long flags;
-
- netif_stop_queue (dev);
-
- dev->stats.tx_bytes += skb->len;
-
- if (el3_debug > 4) {
- pr_debug("%s: el3_start_xmit(length = %u) called, status %4.4x.\n",
- dev->name, skb->len, inw(ioaddr + EL3_STATUS));
- }
-#if 0
-#ifndef final_version
- { /* Error-checking code, delete someday. */
- ushort status = inw(ioaddr + EL3_STATUS);
- if (status & 0x0001 && /* IRQ line active, missed one. */
- inw(ioaddr + EL3_STATUS) & 1) { /* Make sure. */
- pr_debug("%s: Missed interrupt, status then %04x now %04x"
- " Tx %2.2x Rx %4.4x.\n", dev->name, status,
- inw(ioaddr + EL3_STATUS), inb(ioaddr + TX_STATUS),
- inw(ioaddr + RX_STATUS));
- /* Fake interrupt trigger by masking, acknowledge interrupts. */
- outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
- outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
- ioaddr + EL3_CMD);
- outw(SetStatusEnb | 0xff, ioaddr + EL3_CMD);
- }
- }
-#endif
-#endif
- /*
- * We lock the driver against other processors. Note
- * we don't need to lock versus the IRQ as we suspended
- * that. This means that we lose the ability to take
- * an RX during a TX upload. That sucks a bit with SMP
- * on an original 3c509 (2K buffer)
- *
- * Using disable_irq stops us crapping on other
- * time sensitive devices.
- */
-
- spin_lock_irqsave(&lp->lock, flags);
-
- /* Put out the doubleword header... */
- outw(skb->len, ioaddr + TX_FIFO);
- outw(0x00, ioaddr + TX_FIFO);
- /* ... and the packet rounded to a doubleword. */
- outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
-
- if (inw(ioaddr + TX_FREE) > 1536)
- netif_start_queue(dev);
- else
- /* Interrupt us when the FIFO has room for max-sized packet. */
- outw(SetTxThreshold + 1536, ioaddr + EL3_CMD);
-
- spin_unlock_irqrestore(&lp->lock, flags);
-
- dev_kfree_skb (skb);
-
- /* Clear the Tx status stack. */
- {
- short tx_status;
- int i = 4;
-
- while (--i > 0 && (tx_status = inb(ioaddr + TX_STATUS)) > 0) {
- if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
- if (tx_status & 0x30) outw(TxReset, ioaddr + EL3_CMD);
- if (tx_status & 0x3C) outw(TxEnable, ioaddr + EL3_CMD);
- outb(0x00, ioaddr + TX_STATUS); /* Pop the status stack. */
- }
- }
- return NETDEV_TX_OK;
-}
-
-/* The EL3 interrupt handler. */
-static irqreturn_t
-el3_interrupt(int irq, void *dev_id)
-{
- struct net_device *dev = dev_id;
- struct el3_private *lp;
- int ioaddr, status;
- int i = max_interrupt_work;
-
- lp = netdev_priv(dev);
- spin_lock(&lp->lock);
-
- ioaddr = dev->base_addr;
-
- if (el3_debug > 4) {
- status = inw(ioaddr + EL3_STATUS);
- pr_debug("%s: interrupt, status %4.4x.\n", dev->name, status);
- }
-
- while ((status = inw(ioaddr + EL3_STATUS)) &
- (IntLatch | RxComplete | StatsFull)) {
-
- if (status & RxComplete)
- el3_rx(dev);
-
- if (status & TxAvailable) {
- if (el3_debug > 5)
- pr_debug(" TX room bit was handled.\n");
- /* There's room in the FIFO for a full-sized packet. */
- outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
- netif_wake_queue (dev);
- }
- if (status & (AdapterFailure | RxEarly | StatsFull | TxComplete)) {
- /* Handle all uncommon interrupts. */
- if (status & StatsFull) /* Empty statistics. */
- update_stats(dev);
- if (status & RxEarly) { /* Rx early is unused. */
- el3_rx(dev);
- outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
- }
- if (status & TxComplete) { /* Really Tx error. */
- short tx_status;
- int i = 4;
-
- while (--i>0 && (tx_status = inb(ioaddr + TX_STATUS)) > 0) {
- if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
- if (tx_status & 0x30) outw(TxReset, ioaddr + EL3_CMD);
- if (tx_status & 0x3C) outw(TxEnable, ioaddr + EL3_CMD);
- outb(0x00, ioaddr + TX_STATUS); /* Pop the status stack. */
- }
- }
- if (status & AdapterFailure) {
- /* Adapter failure requires Rx reset and reinit. */
- outw(RxReset, ioaddr + EL3_CMD);
- /* Set the Rx filter to the current state. */
- outw(SetRxFilter | RxStation | RxBroadcast
- | (dev->flags & IFF_ALLMULTI ? RxMulticast : 0)
- | (dev->flags & IFF_PROMISC ? RxProm : 0),
- ioaddr + EL3_CMD);
- outw(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
- outw(AckIntr | AdapterFailure, ioaddr + EL3_CMD);
- }
- }
-
- if (--i < 0) {
- pr_err("%s: Infinite loop in interrupt, status %4.4x.\n",
- dev->name, status);
- /* Clear all interrupts. */
- outw(AckIntr | 0xFF, ioaddr + EL3_CMD);
- break;
- }
- /* Acknowledge the IRQ. */
- outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD); /* Ack IRQ */
- }
-
- if (el3_debug > 4) {
- pr_debug("%s: exiting interrupt, status %4.4x.\n", dev->name,
- inw(ioaddr + EL3_STATUS));
- }
- spin_unlock(&lp->lock);
- return IRQ_HANDLED;
-}
-
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-/*
- * Polling receive - used by netconsole and other diagnostic tools
- * to allow network i/o with interrupts disabled.
- */
-static void el3_poll_controller(struct net_device *dev)
-{
- disable_irq(dev->irq);
- el3_interrupt(dev->irq, dev);
- enable_irq(dev->irq);
-}
-#endif
-
-static struct net_device_stats *
-el3_get_stats(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- unsigned long flags;
-
- /*
- * This is fast enough not to bother with disable IRQ
- * stuff.
- */
-
- spin_lock_irqsave(&lp->lock, flags);
- update_stats(dev);
- spin_unlock_irqrestore(&lp->lock, flags);
- return &dev->stats;
-}
-
-/* Update statistics. We change to register window 6, so this should be run
- single-threaded if the device is active. This is expected to be a rare
- operation, and it's simpler for the rest of the driver to assume that
- window 1 is always valid rather than use a special window-state variable.
- */
-static void update_stats(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
-
- if (el3_debug > 5)
- pr_debug(" Updating the statistics.\n");
- /* Turn off statistics updates while reading. */
- outw(StatsDisable, ioaddr + EL3_CMD);
- /* Switch to the stats window, and read everything. */
- EL3WINDOW(6);
- dev->stats.tx_carrier_errors += inb(ioaddr + 0);
- dev->stats.tx_heartbeat_errors += inb(ioaddr + 1);
- /* Multiple collisions. */ inb(ioaddr + 2);
- dev->stats.collisions += inb(ioaddr + 3);
- dev->stats.tx_window_errors += inb(ioaddr + 4);
- dev->stats.rx_fifo_errors += inb(ioaddr + 5);
- dev->stats.tx_packets += inb(ioaddr + 6);
- /* Rx packets */ inb(ioaddr + 7);
- /* Tx deferrals */ inb(ioaddr + 8);
- inw(ioaddr + 10); /* Total Rx and Tx octets. */
- inw(ioaddr + 12);
-
- /* Back to window 1, and turn statistics back on. */
- EL3WINDOW(1);
- outw(StatsEnable, ioaddr + EL3_CMD);
-}
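A brief sketch of the convention update_stats() relies on may help here: every helper that selects another register window puts window 1 back before returning, so nothing else in the driver tracks window state. This assumes EL3WINDOW() expands to the SelectWindow command write, as it does in the 3c515 source later in this patch; el3_with_window() is a made-up name used only for illustration.

	/* Sketch only: the "always leave window 1 selected" invariant. */
	static void el3_with_window(int ioaddr, int win)
	{
		EL3WINDOW(win);	/* select the window holding the registers we need */
		/* ... read or write registers that live in window 'win' ... */
		EL3WINDOW(1);	/* restore the window the rest of the driver assumes */
	}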
-
-static int
-el3_rx(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
- short rx_status;
-
- if (el3_debug > 5)
- pr_debug(" In rx_packet(), status %4.4x, rx_status %4.4x.\n",
- inw(ioaddr+EL3_STATUS), inw(ioaddr+RX_STATUS));
- while ((rx_status = inw(ioaddr + RX_STATUS)) > 0) {
- if (rx_status & 0x4000) { /* Error, update stats. */
- short error = rx_status & 0x3800;
-
- outw(RxDiscard, ioaddr + EL3_CMD);
- dev->stats.rx_errors++;
- switch (error) {
- case 0x0000: dev->stats.rx_over_errors++; break;
- case 0x0800: dev->stats.rx_length_errors++; break;
- case 0x1000: dev->stats.rx_frame_errors++; break;
- case 0x1800: dev->stats.rx_length_errors++; break;
- case 0x2000: dev->stats.rx_frame_errors++; break;
- case 0x2800: dev->stats.rx_crc_errors++; break;
- }
- } else {
- short pkt_len = rx_status & 0x7ff;
- struct sk_buff *skb;
-
- skb = dev_alloc_skb(pkt_len+5);
- if (el3_debug > 4)
- pr_debug("Receiving packet size %d status %4.4x.\n",
- pkt_len, rx_status);
- if (skb != NULL) {
- skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
-
- /* 'skb->data' points to the start of sk_buff data area. */
- insl(ioaddr + RX_FIFO, skb_put(skb,pkt_len),
- (pkt_len + 3) >> 2);
-
- outw(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
- skb->protocol = eth_type_trans(skb,dev);
- netif_rx(skb);
- dev->stats.rx_bytes += pkt_len;
- dev->stats.rx_packets++;
- continue;
- }
- outw(RxDiscard, ioaddr + EL3_CMD);
- dev->stats.rx_dropped++;
- if (el3_debug)
- pr_debug("%s: Couldn't allocate a sk_buff of size %d.\n",
- dev->name, pkt_len);
- }
- inw(ioaddr + EL3_STATUS); /* Delay. */
- while (inw(ioaddr + EL3_STATUS) & 0x1000)
- pr_debug(" Waiting for 3c509 to discard packet, status %x.\n",
- inw(ioaddr + EL3_STATUS) );
- }
-
- return 0;
-}
-
-/*
- * Set or clear the multicast filter for this adaptor.
- */
-static void
-set_multicast_list(struct net_device *dev)
-{
- unsigned long flags;
- struct el3_private *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
- int mc_count = netdev_mc_count(dev);
-
- if (el3_debug > 1) {
- static int old;
- if (old != mc_count) {
- old = mc_count;
- pr_debug("%s: Setting Rx mode to %d addresses.\n",
- dev->name, mc_count);
- }
- }
- spin_lock_irqsave(&lp->lock, flags);
- if (dev->flags&IFF_PROMISC) {
- outw(SetRxFilter | RxStation | RxMulticast | RxBroadcast | RxProm,
- ioaddr + EL3_CMD);
- }
- else if (mc_count || (dev->flags&IFF_ALLMULTI)) {
- outw(SetRxFilter | RxStation | RxMulticast | RxBroadcast, ioaddr + EL3_CMD);
- }
- else
- outw(SetRxFilter | RxStation | RxBroadcast, ioaddr + EL3_CMD);
- spin_unlock_irqrestore(&lp->lock, flags);
-}
-
-static int
-el3_close(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
- struct el3_private *lp = netdev_priv(dev);
-
- if (el3_debug > 2)
- pr_debug("%s: Shutting down ethercard.\n", dev->name);
-
- el3_down(dev);
-
- free_irq(dev->irq, dev);
- /* Switching back to window 0 disables the IRQ. */
- EL3WINDOW(0);
- if (lp->type != EL3_EISA) {
- /* But we explicitly zero the IRQ line select anyway. Don't do
- * it on EISA cards, it prevents the module from getting an
- * IRQ after unload+reload... */
- outw(0x0f00, ioaddr + WN0_IRQ);
- }
-
- return 0;
-}
-
-static int
-el3_link_ok(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
- u16 tmp;
-
- EL3WINDOW(4);
- tmp = inw(ioaddr + WN4_MEDIA);
- EL3WINDOW(1);
- return tmp & (1<<11);
-}
-
-static int
-el3_netdev_get_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- u16 tmp;
- int ioaddr = dev->base_addr;
-
- EL3WINDOW(0);
- /* obtain current transceiver via WN4_MEDIA? */
- tmp = inw(ioaddr + WN0_ADDR_CONF);
- ecmd->transceiver = XCVR_INTERNAL;
- switch (tmp >> 14) {
- case 0:
- ecmd->port = PORT_TP;
- break;
- case 1:
- ecmd->port = PORT_AUI;
- ecmd->transceiver = XCVR_EXTERNAL;
- break;
- case 3:
- ecmd->port = PORT_BNC;
- default:
- break;
- }
-
- ecmd->duplex = DUPLEX_HALF;
- ecmd->supported = 0;
- tmp = inw(ioaddr + WN0_CONF_CTRL);
- if (tmp & (1<<13))
- ecmd->supported |= SUPPORTED_AUI;
- if (tmp & (1<<12))
- ecmd->supported |= SUPPORTED_BNC;
- if (tmp & (1<<9)) {
- ecmd->supported |= SUPPORTED_TP | SUPPORTED_10baseT_Half |
- SUPPORTED_10baseT_Full; /* hmm... */
- EL3WINDOW(4);
- tmp = inw(ioaddr + WN4_NETDIAG);
- if (tmp & FD_ENABLE)
- ecmd->duplex = DUPLEX_FULL;
- }
-
- ethtool_cmd_speed_set(ecmd, SPEED_10);
- EL3WINDOW(1);
- return 0;
-}
-
-static int
-el3_netdev_set_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- u16 tmp;
- int ioaddr = dev->base_addr;
-
- if (ecmd->speed != SPEED_10)
- return -EINVAL;
- if ((ecmd->duplex != DUPLEX_HALF) && (ecmd->duplex != DUPLEX_FULL))
- return -EINVAL;
- if ((ecmd->transceiver != XCVR_INTERNAL) && (ecmd->transceiver != XCVR_EXTERNAL))
- return -EINVAL;
-
- /* change XCVR type */
- EL3WINDOW(0);
- tmp = inw(ioaddr + WN0_ADDR_CONF);
- switch (ecmd->port) {
- case PORT_TP:
- tmp &= ~(3<<14);
- dev->if_port = 0;
- break;
- case PORT_AUI:
- tmp |= (1<<14);
- dev->if_port = 1;
- break;
- case PORT_BNC:
- tmp |= (3<<14);
- dev->if_port = 3;
- break;
- default:
- return -EINVAL;
- }
-
- outw(tmp, ioaddr + WN0_ADDR_CONF);
- if (dev->if_port == 3) {
- /* fire up the DC-DC convertor if BNC gets enabled */
- tmp = inw(ioaddr + WN0_ADDR_CONF);
- if (tmp & (3 << 14)) {
- outw(StartCoax, ioaddr + EL3_CMD);
- udelay(800);
- } else
- return -EIO;
- }
-
- EL3WINDOW(4);
- tmp = inw(ioaddr + WN4_NETDIAG);
- if (ecmd->duplex == DUPLEX_FULL)
- tmp |= FD_ENABLE;
- else
- tmp &= ~FD_ENABLE;
- outw(tmp, ioaddr + WN4_NETDIAG);
- EL3WINDOW(1);
-
- return 0;
-}
-
-static void el3_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
-{
- strcpy(info->driver, DRV_NAME);
- strcpy(info->version, DRV_VERSION);
-}
-
-static int el3_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct el3_private *lp = netdev_priv(dev);
- int ret;
-
- spin_lock_irq(&lp->lock);
- ret = el3_netdev_get_ecmd(dev, ecmd);
- spin_unlock_irq(&lp->lock);
- return ret;
-}
-
-static int el3_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct el3_private *lp = netdev_priv(dev);
- int ret;
-
- spin_lock_irq(&lp->lock);
- ret = el3_netdev_set_ecmd(dev, ecmd);
- spin_unlock_irq(&lp->lock);
- return ret;
-}
-
-static u32 el3_get_link(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- u32 ret;
-
- spin_lock_irq(&lp->lock);
- ret = el3_link_ok(dev);
- spin_unlock_irq(&lp->lock);
- return ret;
-}
-
-static u32 el3_get_msglevel(struct net_device *dev)
-{
- return el3_debug;
-}
-
-static void el3_set_msglevel(struct net_device *dev, u32 v)
-{
- el3_debug = v;
-}
-
-static const struct ethtool_ops ethtool_ops = {
- .get_drvinfo = el3_get_drvinfo,
- .get_settings = el3_get_settings,
- .set_settings = el3_set_settings,
- .get_link = el3_get_link,
- .get_msglevel = el3_get_msglevel,
- .set_msglevel = el3_set_msglevel,
-};
-
-static void
-el3_down(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
-
- netif_stop_queue(dev);
-
- /* Turn off statistics ASAP. We update lp->stats below. */
- outw(StatsDisable, ioaddr + EL3_CMD);
-
- /* Disable the receiver and transmitter. */
- outw(RxDisable, ioaddr + EL3_CMD);
- outw(TxDisable, ioaddr + EL3_CMD);
-
- if (dev->if_port == 3)
- /* Turn off thinnet power. Green! */
- outw(StopCoax, ioaddr + EL3_CMD);
- else if (dev->if_port == 0) {
- /* Disable link beat and jabber, if_port may change here next open(). */
- EL3WINDOW(4);
- outw(inw(ioaddr + WN4_MEDIA) & ~MEDIA_TP, ioaddr + WN4_MEDIA);
- }
-
- outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
-
- update_stats(dev);
-}
-
-static void
-el3_up(struct net_device *dev)
-{
- int i, sw_info, net_diag;
- int ioaddr = dev->base_addr;
-
- /* Activating the board is required and does no harm otherwise */
- outw(0x0001, ioaddr + 4);
-
- /* Set the IRQ line. */
- outw((dev->irq << 12) | 0x0f00, ioaddr + WN0_IRQ);
-
- /* Set the station address in window 2 each time opened. */
- EL3WINDOW(2);
-
- for (i = 0; i < 6; i++)
- outb(dev->dev_addr[i], ioaddr + i);
-
- if ((dev->if_port & 0x03) == 3) /* BNC interface */
- /* Start the thinnet transceiver. We should really wait 50ms...*/
- outw(StartCoax, ioaddr + EL3_CMD);
- else if ((dev->if_port & 0x03) == 0) { /* 10baseT interface */
- /* Combine secondary sw_info word (the adapter level) and primary
- sw_info word (duplex setting plus other useless bits) */
- EL3WINDOW(0);
- sw_info = (read_eeprom(ioaddr, 0x14) & 0x400f) |
- (read_eeprom(ioaddr, 0x0d) & 0xBff0);
-
- EL3WINDOW(4);
- net_diag = inw(ioaddr + WN4_NETDIAG);
- net_diag = (net_diag | FD_ENABLE); /* temporarily assume full-duplex will be set */
- pr_info("%s: ", dev->name);
- switch (dev->if_port & 0x0c) {
- case 12:
- /* force full-duplex mode if 3c5x9b */
- if (sw_info & 0x000f) {
- pr_cont("Forcing 3c5x9b full-duplex mode");
- break;
- }
- case 8:
- /* set full-duplex mode based on eeprom config setting */
- if ((sw_info & 0x000f) && (sw_info & 0x8000)) {
- pr_cont("Setting 3c5x9b full-duplex mode (from EEPROM configuration bit)");
- break;
- }
- default:
- /* xcvr=(0 || 4) OR user has an old 3c5x9 non "B" model */
- pr_cont("Setting 3c5x9/3c5x9B half-duplex mode");
- net_diag = (net_diag & ~FD_ENABLE); /* disable full duplex */
- }
-
- outw(net_diag, ioaddr + WN4_NETDIAG);
- pr_cont(" if_port: %d, sw_info: %4.4x\n", dev->if_port, sw_info);
- if (el3_debug > 3)
- pr_debug("%s: 3c5x9 net diag word is now: %4.4x.\n", dev->name, net_diag);
- /* Enable link beat and jabber check. */
- outw(inw(ioaddr + WN4_MEDIA) | MEDIA_TP, ioaddr + WN4_MEDIA);
- }
-
- /* Switch to the stats window, and clear all stats by reading. */
- outw(StatsDisable, ioaddr + EL3_CMD);
- EL3WINDOW(6);
- for (i = 0; i < 9; i++)
- inb(ioaddr + i);
- inw(ioaddr + 10);
- inw(ioaddr + 12);
-
- /* Switch to register set 1 for normal use. */
- EL3WINDOW(1);
-
- /* Accept broadcast and phys addr only. */
- outw(SetRxFilter | RxStation | RxBroadcast, ioaddr + EL3_CMD);
- outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
-
- outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
- outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
- /* Allow status bits to be seen. */
- outw(SetStatusEnb | 0xff, ioaddr + EL3_CMD);
- /* Ack all pending events, and set active indicator mask. */
- outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
- ioaddr + EL3_CMD);
- outw(SetIntrEnb | IntLatch|TxAvailable|TxComplete|RxComplete|StatsFull,
- ioaddr + EL3_CMD);
-
- netif_start_queue(dev);
-}
-
-/* Power Management support functions */
-#ifdef CONFIG_PM
-
-static int
-el3_suspend(struct device *pdev, pm_message_t state)
-{
- unsigned long flags;
- struct net_device *dev;
- struct el3_private *lp;
- int ioaddr;
-
- dev = dev_get_drvdata(pdev);
- lp = netdev_priv(dev);
- ioaddr = dev->base_addr;
-
- spin_lock_irqsave(&lp->lock, flags);
-
- if (netif_running(dev))
- netif_device_detach(dev);
-
- el3_down(dev);
- outw(PowerDown, ioaddr + EL3_CMD);
-
- spin_unlock_irqrestore(&lp->lock, flags);
- return 0;
-}
-
-static int
-el3_resume(struct device *pdev)
-{
- unsigned long flags;
- struct net_device *dev;
- struct el3_private *lp;
- int ioaddr;
-
- dev = dev_get_drvdata(pdev);
- lp = netdev_priv(dev);
- ioaddr = dev->base_addr;
-
- spin_lock_irqsave(&lp->lock, flags);
-
- outw(PowerUp, ioaddr + EL3_CMD);
- EL3WINDOW(0);
- el3_up(dev);
-
- if (netif_running(dev))
- netif_device_attach(dev);
-
- spin_unlock_irqrestore(&lp->lock, flags);
- return 0;
-}
-
-#endif /* CONFIG_PM */
-
-module_param(debug,int, 0);
-module_param_array(irq, int, NULL, 0);
-module_param(max_interrupt_work, int, 0);
-MODULE_PARM_DESC(debug, "debug level (0-6)");
-MODULE_PARM_DESC(irq, "IRQ number(s) (assigned)");
-MODULE_PARM_DESC(max_interrupt_work, "maximum events handled per interrupt");
-#ifdef CONFIG_PNP
-module_param(nopnp, int, 0);
-MODULE_PARM_DESC(nopnp, "disable ISA PnP support (0-1)");
-#endif /* CONFIG_PNP */
-MODULE_DESCRIPTION("3Com Etherlink III (3c509, 3c509B, 3c529, 3c579) ethernet driver");
-MODULE_LICENSE("GPL");
-
-static int __init el3_init_module(void)
-{
- int ret = 0;
-
- if (debug >= 0)
- el3_debug = debug;
-
-#ifdef CONFIG_PNP
- if (!nopnp) {
- ret = pnp_register_driver(&el3_pnp_driver);
- if (!ret)
- pnp_registered = 1;
- }
-#endif
- /* Select an open I/O location at 0x1*0 to do ISA contention select. */
- /* Start with 0x110 to avoid some sound cards.*/
- for (id_port = 0x110 ; id_port < 0x200; id_port += 0x10) {
- if (!request_region(id_port, 1, "3c509-control"))
- continue;
- outb(0x00, id_port);
- outb(0xff, id_port);
- if (inb(id_port) & 0x01)
- break;
- else
- release_region(id_port, 1);
- }
- if (id_port >= 0x200) {
- id_port = 0;
- pr_err("No I/O port available for 3c509 activation.\n");
- } else {
- ret = isa_register_driver(&el3_isa_driver, EL3_MAX_CARDS);
- if (!ret)
- isa_registered = 1;
- }
-#ifdef CONFIG_EISA
- ret = eisa_driver_register(&el3_eisa_driver);
- if (!ret)
- eisa_registered = 1;
-#endif
-#ifdef CONFIG_MCA
- ret = mca_register_driver(&el3_mca_driver);
- if (!ret)
- mca_registered = 1;
-#endif
-
-#ifdef CONFIG_PNP
- if (pnp_registered)
- ret = 0;
-#endif
- if (isa_registered)
- ret = 0;
-#ifdef CONFIG_EISA
- if (eisa_registered)
- ret = 0;
-#endif
-#ifdef CONFIG_MCA
- if (mca_registered)
- ret = 0;
-#endif
- return ret;
-}
-
-static void __exit el3_cleanup_module(void)
-{
-#ifdef CONFIG_PNP
- if (pnp_registered)
- pnp_unregister_driver(&el3_pnp_driver);
-#endif
- if (isa_registered)
- isa_unregister_driver(&el3_isa_driver);
- if (id_port)
- release_region(id_port, 1);
-#ifdef CONFIG_EISA
- if (eisa_registered)
- eisa_driver_unregister(&el3_eisa_driver);
-#endif
-#ifdef CONFIG_MCA
- if (mca_registered)
- mca_unregister_driver(&el3_mca_driver);
-#endif
-}
-
-module_init (el3_init_module);
-module_exit (el3_cleanup_module);
+++ /dev/null
-/*
- Written 1997-1998 by Donald Becker.
-
- This software may be used and distributed according to the terms
- of the GNU General Public License, incorporated herein by reference.
-
- This driver is for the 3Com ISA EtherLink XL "Corkscrew" 3c515 ethercard.
-
- The author may be reached as becker@scyld.com, or C/O
- Scyld Computing Corporation
- 410 Severn Ave., Suite 210
- Annapolis MD 21403
-
-
- 2000/2/2- Added support for kernel-level ISAPnP
- by Stephen Frost <sfrost@snowman.net> and Alessandro Zummo
- Cleaned up for 2.3.x/softnet by Jeff Garzik and Alan Cox.
-
- 2001/11/17 - Added ethtool support (jgarzik)
-
- 2002/10/28 - Locking updates for 2.5 (alan@lxorguk.ukuu.org.uk)
-
-*/
-
-#define DRV_NAME "3c515"
-#define DRV_VERSION "0.99t-ac"
-#define DRV_RELDATE "28-Oct-2002"
-
-static char *version =
-DRV_NAME ".c:v" DRV_VERSION " " DRV_RELDATE " becker@scyld.com and others\n";
-
-#define CORKSCREW 1
-
-/* "Knobs" that adjust features and parameters. */
-/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
- Setting to > 1512 effectively disables this feature. */
-static int rx_copybreak = 200;
-
-/* Allow setting MTU to a larger size, bypassing the normal ethernet setup. */
-static const int mtu = 1500;
-
-/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
-static int max_interrupt_work = 20;
-
-/* Enable the automatic media selection code -- usually set. */
-#define AUTOMEDIA 1
-
-/* Allow the use of fragment bus master transfers instead of only
- programmed-I/O for Vortex cards. Full-bus-master transfers are always
- enabled by default on Boomerang cards. If VORTEX_BUS_MASTER is defined,
- the feature may be turned on using 'options'. */
-#define VORTEX_BUS_MASTER
-
-/* A few values that may be tweaked. */
-/* Keep the ring sizes a power of two for efficiency. */
-#define TX_RING_SIZE 16
-#define RX_RING_SIZE 16
-#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer. */
-
-#include <linux/module.h>
-#include <linux/isapnp.h>
-#include <linux/kernel.h>
-#include <linux/netdevice.h>
-#include <linux/string.h>
-#include <linux/errno.h>
-#include <linux/in.h>
-#include <linux/ioport.h>
-#include <linux/skbuff.h>
-#include <linux/etherdevice.h>
-#include <linux/interrupt.h>
-#include <linux/timer.h>
-#include <linux/ethtool.h>
-#include <linux/bitops.h>
-
-#include <asm/uaccess.h>
-#include <asm/io.h>
-#include <asm/dma.h>
-
-#define NEW_MULTICAST
-#include <linux/delay.h>
-
-#define MAX_UNITS 8
-
-MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
-MODULE_DESCRIPTION("3Com 3c515 Corkscrew driver");
-MODULE_LICENSE("GPL");
-MODULE_VERSION(DRV_VERSION);
-
-/* "Knobs" for adjusting internal parameters. */
-/* Put out somewhat more debugging messages. (0 - no msg, 1 minimal msgs). */
-#define DRIVER_DEBUG 1
-/* Some values here are only for performance evaluation and path-coverage
- debugging. */
-static int rx_nocopy, rx_copy, queued_packet;
-
-/* Number of times to check to see if the Tx FIFO has space, used in some
- limited cases. */
-#define WAIT_TX_AVAIL 200
-
-/* Operational parameters that are not usually changed. */
-#define TX_TIMEOUT ((4*HZ)/10) /* Time in jiffies before concluding Tx hung */
-
-/* The size here is somewhat misleading: the Corkscrew also uses the ISA
- aliased registers at <base>+0x400.
- */
-#define CORKSCREW_TOTAL_SIZE 0x20
-
-#ifdef DRIVER_DEBUG
-static int corkscrew_debug = DRIVER_DEBUG;
-#else
-static int corkscrew_debug = 1;
-#endif
-
-#define CORKSCREW_ID 10
-
-/*
- Theory of Operation
-
-I. Board Compatibility
-
-This device driver is designed for the 3Com 3c515 ISA Fast EtherLink XL,
-3Com's ISA bus adapter for Fast Ethernet. Due to the unique I/O port layout,
-it's not practical to integrate this driver with the other EtherLink drivers.
-
-II. Board-specific settings
-
-The Corkscrew has an EEPROM for configuration, but no special settings are
-needed for Linux.
-
-III. Driver operation
-
-The 3c515 series use an interface that's very similar to the 3c900 "Boomerang"
-PCI cards, with the bus master interface extensively modified to work with
-the ISA bus.
-
-The card is capable of full-bus-master transfers with separate
-lists of transmit and receive descriptors, similar to the AMD LANCE/PCnet,
-DEC Tulip and Intel Speedo3.
-
-This driver uses a "RX_COPYBREAK" scheme rather than a fixed intermediate
-receive buffer. This scheme allocates full-sized skbuffs as receive
-buffers. The value RX_COPYBREAK is used as the copying breakpoint: it is
-chosen to trade off the memory wasted by passing a full-sized skbuff to
-the queue layer for every frame against the cost of copying small frames
-into a correctly-sized skbuff.
-
-
-IIIC. Synchronization
-The driver runs as two independent, single-threaded flows of control. One
-is the send-packet routine, which enforces single-threaded use by the netif
-layer. The other thread is the interrupt handler, which is single
-threaded by the hardware and other software.
-
-IV. Notes
-
-Thanks to Terry Murphy of 3Com for providing documentation and a development
-board.
-
-The names "Vortex", "Boomerang" and "Corkscrew" are the internal 3Com
-project names. I use these names to eliminate confusion -- 3Com product
-numbers and names are very similar and often confused.
-
-The new chips support both ethernet (1.5K) and FDDI (4.5K) frame sizes!
-This driver only supports ethernet frames because of the recent MTU limit
-of 1.5K, but the changes to support 4.5K are minimal.
-*/
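The RX_COPYBREAK trade-off described above comes down to a single length test in the receive path. A condensed sketch of that decision, using the same fields that boomerang_rx() below operates on (ring_buf stands in for the bus-to-virtual translation of the descriptor's buffer address):

	/* Sketch only: copy small frames, hand large ones up in place. */
	if (pkt_len < rx_copybreak &&
	    (skb = dev_alloc_skb(pkt_len + 4)) != NULL) {
		skb_reserve(skb, 2);			/* align the IP header */
		memcpy(skb_put(skb, pkt_len), ring_buf, pkt_len);  /* ring skbuff stays put */
	} else {
		skb = vp->rx_skbuff[entry];		/* pass the full-sized ring skbuff up */
		vp->rx_skbuff[entry] = NULL;		/* the slot is refilled on the next pass */
		skb_put(skb, pkt_len);
	}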
-
-/* Operational definitions.
- These are not used by other compilation units and thus are not
- exported in a ".h" file.
-
- First the windows. There are eight register windows, with the command
- and status registers available in each.
- */
-#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
-#define EL3_CMD 0x0e
-#define EL3_STATUS 0x0e
-
-/* The top five bits written to EL3_CMD are a command, the lower
- 11 bits are the parameter, if applicable.
- Note that 11 parameter bits were fine for ethernet, but the new chips
- can handle FDDI-length frames (~4500 octets), so parameters now count
- 32-bit 'Dwords' rather than octets. */
-
-enum corkscrew_cmd {
- TotalReset = 0 << 11, SelectWindow = 1 << 11, StartCoax = 2 << 11,
- RxDisable = 3 << 11, RxEnable = 4 << 11, RxReset = 5 << 11,
- UpStall = 6 << 11, UpUnstall = (6 << 11) + 1, DownStall = (6 << 11) + 2,
- DownUnstall = (6 << 11) + 3, RxDiscard = 8 << 11, TxEnable = 9 << 11,
- TxDisable = 10 << 11, TxReset = 11 << 11, FakeIntr = 12 << 11,
- AckIntr = 13 << 11, SetIntrEnb = 14 << 11, SetStatusEnb = 15 << 11,
- SetRxFilter = 16 << 11, SetRxThreshold = 17 << 11,
- SetTxThreshold = 18 << 11, SetTxStart = 19 << 11, StartDMAUp = 20 << 11,
- StartDMADown = (20 << 11) + 1, StatsEnable = 21 << 11,
- StatsDisable = 22 << 11, StopCoax = 23 << 11,
-};
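In other words, every outw() to EL3_CMD packs an opcode into bits 15-11 and an optional argument into bits 10-0. As one concrete case, the Tx threshold command issued later in this file encodes opcode 18 together with a dword count:

	/* Sketch only: SetTxThreshold (18 << 11) plus 384 dwords = 1536 bytes. */
	outw(SetTxThreshold + (1536 >> 2), ioaddr + EL3_CMD);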
-
-/* The SetRxFilter command accepts the following classes: */
-enum RxFilter {
- RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8
-};
-
-/* Bits in the general status register. */
-enum corkscrew_status {
- IntLatch = 0x0001, AdapterFailure = 0x0002, TxComplete = 0x0004,
- TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
- IntReq = 0x0040, StatsFull = 0x0080,
- DMADone = 1 << 8, DownComplete = 1 << 9, UpComplete = 1 << 10,
- DMAInProgress = 1 << 11, /* DMA controller is still busy. */
- CmdInProgress = 1 << 12, /* EL3_CMD is still busy. */
-};
-
-/* Register window 1 offsets, the window used in normal operation.
- On the Corkscrew this window is always mapped at offsets 0x10-0x1f. */
-enum Window1 {
- TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
- RxStatus = 0x18, Timer = 0x1A, TxStatus = 0x1B,
- TxFree = 0x1C, /* Remaining free bytes in Tx buffer. */
-};
-enum Window0 {
- Wn0IRQ = 0x08,
-#if defined(CORKSCREW)
- Wn0EepromCmd = 0x200A, /* Corkscrew EEPROM command register. */
- Wn0EepromData = 0x200C, /* Corkscrew EEPROM results register. */
-#else
- Wn0EepromCmd = 10, /* Window 0: EEPROM command register. */
- Wn0EepromData = 12, /* Window 0: EEPROM results register. */
-#endif
-};
-enum Win0_EEPROM_bits {
- EEPROM_Read = 0x80, EEPROM_WRITE = 0x40, EEPROM_ERASE = 0xC0,
- EEPROM_EWENB = 0x30, /* Enable erasing/writing for 10 msec. */
- EEPROM_EWDIS = 0x00, /* Disable EWENB before 10 msec timeout. */
-};
-
-/* EEPROM locations. */
-enum eeprom_offset {
- PhysAddr01 = 0, PhysAddr23 = 1, PhysAddr45 = 2, ModelID = 3,
- EtherLink3ID = 7,
-};
-
-enum Window3 { /* Window 3: MAC/config bits. */
- Wn3_Config = 0, Wn3_MAC_Ctrl = 6, Wn3_Options = 8,
-};
-enum wn3_config {
- Ram_size = 7,
- Ram_width = 8,
- Ram_speed = 0x30,
- Rom_size = 0xc0,
- Ram_split_shift = 16,
- Ram_split = 3 << Ram_split_shift,
- Xcvr_shift = 20,
- Xcvr = 7 << Xcvr_shift,
- Autoselect = 0x1000000,
-};
-
-enum Window4 {
- Wn4_NetDiag = 6, Wn4_Media = 10, /* Window 4: Xcvr/media bits. */
-};
-enum Win4_Media_bits {
- Media_SQE = 0x0008, /* Enable SQE error counting for AUI. */
- Media_10TP = 0x00C0, /* Enable link beat and jabber for 10baseT. */
- Media_Lnk = 0x0080, /* Enable just link beat for 100TX/100FX. */
- Media_LnkBeat = 0x0800,
-};
-enum Window7 { /* Window 7: Bus Master control. */
- Wn7_MasterAddr = 0, Wn7_MasterLen = 6, Wn7_MasterStatus = 12,
-};
-
-/* Boomerang-style bus master control registers. Note ISA aliases! */
-enum MasterCtrl {
- PktStatus = 0x400, DownListPtr = 0x404, FragAddr = 0x408, FragLen =
- 0x40c,
- TxFreeThreshold = 0x40f, UpPktStatus = 0x410, UpListPtr = 0x418,
-};
-
-/* The Rx and Tx descriptor lists.
- Caution Alpha hackers: these types are 32 bits! Note also the 8 byte
- alignment constraint on tx_ring[] and rx_ring[]. */
-struct boom_rx_desc {
- u32 next;
- s32 status;
- u32 addr;
- s32 length;
-};
-
-/* Values for the Rx status entry. */
-enum rx_desc_status {
- RxDComplete = 0x00008000, RxDError = 0x4000,
- /* See boomerang_rx() for actual error bits */
-};
-
-struct boom_tx_desc {
- u32 next;
- s32 status;
- u32 addr;
- s32 length;
-};
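These descriptors are chained by ISA bus (physical) addresses rather than kernel pointers, which is why corkscrew_open() below runs everything through isa_virt_to_bus(). A condensed view of how one Rx ring entry ends up filled in there:

	/* Sketch only: one Rx descriptor, chained and aimed at its skbuff. */
	vp->rx_ring[i].next   = isa_virt_to_bus(&vp->rx_ring[i + 1]);	/* bus address of the next descriptor */
	vp->rx_ring[i].status = 0;					/* completion bit clear */
	vp->rx_ring[i].length = PKT_BUF_SZ | 0x80000000;		/* buffer size, high (last-fragment) bit set */
	vp->rx_ring[i].addr   = isa_virt_to_bus(skb->data);		/* where the NIC DMAs the frame */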
-
-struct corkscrew_private {
- const char *product_name;
- struct list_head list;
- struct net_device *our_dev;
- /* The Rx and Tx rings are here to keep them quad-word-aligned. */
- struct boom_rx_desc rx_ring[RX_RING_SIZE];
- struct boom_tx_desc tx_ring[TX_RING_SIZE];
- /* The addresses of transmit- and receive-in-place skbuffs. */
- struct sk_buff *rx_skbuff[RX_RING_SIZE];
- struct sk_buff *tx_skbuff[TX_RING_SIZE];
- unsigned int cur_rx, cur_tx; /* The next free ring entry */
- unsigned int dirty_rx, dirty_tx;/* The ring entries to be free()ed. */
- struct sk_buff *tx_skb; /* Packet being eaten by bus master ctrl. */
- struct timer_list timer; /* Media selection timer. */
- int capabilities ; /* Adapter capabilities word. */
- int options; /* User-settable misc. driver options. */
- int last_rx_packets; /* For media autoselection. */
- unsigned int available_media:8, /* From Wn3_Options */
- media_override:3, /* Passed-in media type. */
- default_media:3, /* Read from the EEPROM. */
- full_duplex:1, autoselect:1, bus_master:1, /* Vortex can only do a fragment bus-master. */
- full_bus_master_tx:1, full_bus_master_rx:1, /* Boomerang */
- tx_full:1;
- spinlock_t lock;
- struct device *dev;
-};
-
-/* The action to take with a media selection timer tick.
- Note that we deviate from the 3Com order by checking 10base2 before AUI.
- */
-enum xcvr_types {
- XCVR_10baseT = 0, XCVR_AUI, XCVR_10baseTOnly, XCVR_10base2, XCVR_100baseTx,
- XCVR_100baseFx, XCVR_MII = 6, XCVR_Default = 8,
-};
-
-static struct media_table {
- char *name;
- unsigned int media_bits:16, /* Bits to set in Wn4_Media register. */
- mask:8, /* The transceiver-present bit in Wn3_Config. */
- next:8; /* The media type to try next. */
- short wait; /* Time before we check media status. */
-} media_tbl[] = {
- { "10baseT", Media_10TP, 0x08, XCVR_10base2, (14 * HZ) / 10 },
- { "10Mbs AUI", Media_SQE, 0x20, XCVR_Default, (1 * HZ) / 10},
- { "undefined", 0, 0x80, XCVR_10baseT, 10000},
- { "10base2", 0, 0x10, XCVR_AUI, (1 * HZ) / 10},
- { "100baseTX", Media_Lnk, 0x02, XCVR_100baseFx, (14 * HZ) / 10},
- { "100baseFX", Media_Lnk, 0x04, XCVR_MII, (14 * HZ) / 10},
- { "MII", 0, 0x40, XCVR_10baseT, 3 * HZ},
- { "undefined", 0, 0x01, XCVR_10baseT, 10000},
- { "Default", 0, 0xFF, XCVR_10baseT, 10000},
-};
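Autoselection later in the file treats this table as a singly linked list: each entry's mask is tested against the Wn3_Options capability word, and .next names the port to fall back to. A condensed form of the walk done in corkscrew_open() and corkscrew_timer():

	/* Sketch only: walk the media chain until we hit a transceiver the card has. */
	dev->if_port = XCVR_100baseTx;				/* start with the fastest option */
	while (!(vp->available_media & media_tbl[dev->if_port].mask))
		dev->if_port = media_tbl[dev->if_port].next;	/* fall back along .next */
	/* corkscrew_timer() then re-checks link beat after media_tbl[dev->if_port].wait jiffies. */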
-
-#ifdef __ISAPNP__
-static struct isapnp_device_id corkscrew_isapnp_adapters[] = {
- { ISAPNP_ANY_ID, ISAPNP_ANY_ID,
- ISAPNP_VENDOR('T', 'C', 'M'), ISAPNP_FUNCTION(0x5051),
- (long) "3Com Fast EtherLink ISA" },
- { } /* terminate list */
-};
-
-MODULE_DEVICE_TABLE(isapnp, corkscrew_isapnp_adapters);
-
-static int nopnp;
-#endif /* __ISAPNP__ */
-
-static struct net_device *corkscrew_scan(int unit);
-static int corkscrew_setup(struct net_device *dev, int ioaddr,
- struct pnp_dev *idev, int card_number);
-static int corkscrew_open(struct net_device *dev);
-static void corkscrew_timer(unsigned long arg);
-static netdev_tx_t corkscrew_start_xmit(struct sk_buff *skb,
- struct net_device *dev);
-static int corkscrew_rx(struct net_device *dev);
-static void corkscrew_timeout(struct net_device *dev);
-static int boomerang_rx(struct net_device *dev);
-static irqreturn_t corkscrew_interrupt(int irq, void *dev_id);
-static int corkscrew_close(struct net_device *dev);
-static void update_stats(int addr, struct net_device *dev);
-static struct net_device_stats *corkscrew_get_stats(struct net_device *dev);
-static void set_rx_mode(struct net_device *dev);
-static const struct ethtool_ops netdev_ethtool_ops;
-
-
-/*
- Unfortunately maximizing the shared code between the integrated and
- module version of the driver results in a complicated set of initialization
- procedures.
- init_module() -- modules / tc515_probe() -- built-in
- The wrappers for corkscrew_scan()
- corkscrew_scan() The common routine that scans for ISAPnP and ISA cards
- corkscrew_found_device() Allocate a device structure when we find a card.
- Different versions exist for modules and built-in.
- corkscrew_probe1() Fill in the device structure -- this is separated
- so that the modules code can put it in dev->init.
-*/
-/* This driver uses 'options' to pass the media type, full-duplex flag, etc. */
-/* Note: this is the only limit on the number of cards supported!! */
-static int options[MAX_UNITS] = { -1, -1, -1, -1, -1, -1, -1, -1, };
-
-#ifdef MODULE
-static int debug = -1;
-
-module_param(debug, int, 0);
-module_param_array(options, int, NULL, 0);
-module_param(rx_copybreak, int, 0);
-module_param(max_interrupt_work, int, 0);
-MODULE_PARM_DESC(debug, "3c515 debug level (0-6)");
-MODULE_PARM_DESC(options, "3c515: Bits 0-2: media type, bit 3: full duplex, bit 4: bus mastering");
-MODULE_PARM_DESC(rx_copybreak, "3c515 copy breakpoint for copy-only-tiny-frames");
-MODULE_PARM_DESC(max_interrupt_work, "3c515 maximum events handled per interrupt");
-
-/* A list of all installed Vortex devices, for removing the driver module. */
-/* we will need locking (and refcounting) if we ever use it for more */
-static LIST_HEAD(root_corkscrew_dev);
-
-int init_module(void)
-{
- int found = 0;
- if (debug >= 0)
- corkscrew_debug = debug;
- if (corkscrew_debug)
- pr_debug("%s", version);
- while (corkscrew_scan(-1))
- found++;
- return found ? 0 : -ENODEV;
-}
-
-#else
-struct net_device *tc515_probe(int unit)
-{
- struct net_device *dev = corkscrew_scan(unit);
- static int printed;
-
- if (!dev)
- return ERR_PTR(-ENODEV);
-
- if (corkscrew_debug > 0 && !printed) {
- printed = 1;
- pr_debug("%s", version);
- }
-
- return dev;
-}
-#endif /* not MODULE */
-
-static int check_device(unsigned ioaddr)
-{
- int timer;
-
- if (!request_region(ioaddr, CORKSCREW_TOTAL_SIZE, "3c515"))
- return 0;
- /* Check the resource configuration for a matching ioaddr. */
- if ((inw(ioaddr + 0x2002) & 0x1f0) != (ioaddr & 0x1f0)) {
- release_region(ioaddr, CORKSCREW_TOTAL_SIZE);
- return 0;
- }
- /* Verify by reading the device ID from the EEPROM. */
- outw(EEPROM_Read + 7, ioaddr + Wn0EepromCmd);
- /* Pause for at least 162 us. for the read to take place. */
- for (timer = 4; timer >= 0; timer--) {
- udelay(162);
- if ((inw(ioaddr + Wn0EepromCmd) & 0x0200) == 0)
- break;
- }
- if (inw(ioaddr + Wn0EepromData) != 0x6d50) {
- release_region(ioaddr, CORKSCREW_TOTAL_SIZE);
- return 0;
- }
- return 1;
-}
-
-static void cleanup_card(struct net_device *dev)
-{
- struct corkscrew_private *vp = netdev_priv(dev);
- list_del_init(&vp->list);
- if (dev->dma)
- free_dma(dev->dma);
- outw(TotalReset, dev->base_addr + EL3_CMD);
- release_region(dev->base_addr, CORKSCREW_TOTAL_SIZE);
- if (vp->dev)
- pnp_device_detach(to_pnp_dev(vp->dev));
-}
-
-static struct net_device *corkscrew_scan(int unit)
-{
- struct net_device *dev;
- static int cards_found = 0;
- static int ioaddr;
- int err;
-#ifdef __ISAPNP__
- short i;
- static int pnp_cards;
-#endif
-
- dev = alloc_etherdev(sizeof(struct corkscrew_private));
- if (!dev)
- return ERR_PTR(-ENOMEM);
-
- if (unit >= 0) {
- sprintf(dev->name, "eth%d", unit);
- netdev_boot_setup_check(dev);
- }
-
-#ifdef __ISAPNP__
- if(nopnp == 1)
- goto no_pnp;
- for(i=0; corkscrew_isapnp_adapters[i].vendor != 0; i++) {
- struct pnp_dev *idev = NULL;
- int irq;
- while((idev = pnp_find_dev(NULL,
- corkscrew_isapnp_adapters[i].vendor,
- corkscrew_isapnp_adapters[i].function,
- idev))) {
-
- if (pnp_device_attach(idev) < 0)
- continue;
- if (pnp_activate_dev(idev) < 0) {
- pr_warning("pnp activate failed (out of resources?)\n");
- pnp_device_detach(idev);
- continue;
- }
- if (!pnp_port_valid(idev, 0) || !pnp_irq_valid(idev, 0)) {
- pnp_device_detach(idev);
- continue;
- }
- ioaddr = pnp_port_start(idev, 0);
- irq = pnp_irq(idev, 0);
- if (!check_device(ioaddr)) {
- pnp_device_detach(idev);
- continue;
- }
- if(corkscrew_debug)
- pr_debug("ISAPNP reports %s at i/o 0x%x, irq %d\n",
- (char*) corkscrew_isapnp_adapters[i].driver_data, ioaddr, irq);
- pr_info("3c515 Resource configuration register %#4.4x, DCR %4.4x.\n",
- inl(ioaddr + 0x2002), inw(ioaddr + 0x2000));
- /* irq = inw(ioaddr + 0x2002) & 15; */ /* Use the irq from isapnp */
- SET_NETDEV_DEV(dev, &idev->dev);
- pnp_cards++;
- err = corkscrew_setup(dev, ioaddr, idev, cards_found++);
- if (!err)
- return dev;
- cleanup_card(dev);
- }
- }
-no_pnp:
-#endif /* __ISAPNP__ */
-
- /* Check all locations on the ISA bus -- evil! */
- for (ioaddr = 0x100; ioaddr < 0x400; ioaddr += 0x20) {
- if (!check_device(ioaddr))
- continue;
-
- pr_info("3c515 Resource configuration register %#4.4x, DCR %4.4x.\n",
- inl(ioaddr + 0x2002), inw(ioaddr + 0x2000));
- err = corkscrew_setup(dev, ioaddr, NULL, cards_found++);
- if (!err)
- return dev;
- cleanup_card(dev);
- }
- free_netdev(dev);
- return NULL;
-}
-
-
-static const struct net_device_ops netdev_ops = {
- .ndo_open = corkscrew_open,
- .ndo_stop = corkscrew_close,
- .ndo_start_xmit = corkscrew_start_xmit,
- .ndo_tx_timeout = corkscrew_timeout,
- .ndo_get_stats = corkscrew_get_stats,
- .ndo_set_multicast_list = set_rx_mode,
- .ndo_change_mtu = eth_change_mtu,
- .ndo_set_mac_address = eth_mac_addr,
- .ndo_validate_addr = eth_validate_addr,
-};
-
-
-static int corkscrew_setup(struct net_device *dev, int ioaddr,
- struct pnp_dev *idev, int card_number)
-{
- struct corkscrew_private *vp = netdev_priv(dev);
- unsigned int eeprom[0x40], checksum = 0; /* EEPROM contents */
- int i;
- int irq;
-
-#ifdef __ISAPNP__
- if (idev) {
- irq = pnp_irq(idev, 0);
- vp->dev = &idev->dev;
- } else {
- irq = inw(ioaddr + 0x2002) & 15;
- }
-#else
- irq = inw(ioaddr + 0x2002) & 15;
-#endif
-
- dev->base_addr = ioaddr;
- dev->irq = irq;
- dev->dma = inw(ioaddr + 0x2000) & 7;
- vp->product_name = "3c515";
- vp->options = dev->mem_start;
- vp->our_dev = dev;
-
- if (!vp->options) {
- if (card_number >= MAX_UNITS)
- vp->options = -1;
- else
- vp->options = options[card_number];
- }
-
- if (vp->options >= 0) {
- vp->media_override = vp->options & 7;
- if (vp->media_override == 2)
- vp->media_override = 0;
- vp->full_duplex = (vp->options & 8) ? 1 : 0;
- vp->bus_master = (vp->options & 16) ? 1 : 0;
- } else {
- vp->media_override = 7;
- vp->full_duplex = 0;
- vp->bus_master = 0;
- }
-#ifdef MODULE
- list_add(&vp->list, &root_corkscrew_dev);
-#endif
-
- pr_info("%s: 3Com %s at %#3x,", dev->name, vp->product_name, ioaddr);
-
- spin_lock_init(&vp->lock);
-
- /* Read the station address from the EEPROM. */
- EL3WINDOW(0);
- for (i = 0; i < 0x18; i++) {
- __be16 *phys_addr = (__be16 *) dev->dev_addr;
- int timer;
- outw(EEPROM_Read + i, ioaddr + Wn0EepromCmd);
- /* Pause for at least 162 us. for the read to take place. */
- for (timer = 4; timer >= 0; timer--) {
- udelay(162);
- if ((inw(ioaddr + Wn0EepromCmd) & 0x0200) == 0)
- break;
- }
- eeprom[i] = inw(ioaddr + Wn0EepromData);
- checksum ^= eeprom[i];
- if (i < 3)
- phys_addr[i] = htons(eeprom[i]);
- }
- checksum = (checksum ^ (checksum >> 8)) & 0xff;
- if (checksum != 0x00)
- pr_cont(" ***INVALID CHECKSUM %4.4x*** ", checksum);
- pr_cont(" %pM", dev->dev_addr);
- if (eeprom[16] == 0x11c7) { /* Corkscrew */
- if (request_dma(dev->dma, "3c515")) {
- pr_cont(", DMA %d allocation failed", dev->dma);
- dev->dma = 0;
- } else
- pr_cont(", DMA %d", dev->dma);
- }
- pr_cont(", IRQ %d\n", dev->irq);
- /* Tell them about an invalid IRQ. */
- if (corkscrew_debug && (dev->irq <= 0 || dev->irq > 15))
- pr_warning(" *** Warning: this IRQ is unlikely to work! ***\n");
-
- {
- static const char * const ram_split[] = {
- "5:3", "3:1", "1:1", "3:5"
- };
- __u32 config;
- EL3WINDOW(3);
- vp->available_media = inw(ioaddr + Wn3_Options);
- config = inl(ioaddr + Wn3_Config);
- if (corkscrew_debug > 1)
- pr_info(" Internal config register is %4.4x, transceivers %#x.\n",
- config, inw(ioaddr + Wn3_Options));
- pr_info(" %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
- 8 << (config & Ram_size),
- config & Ram_width ? "word" : "byte",
- ram_split[(config & Ram_split) >> Ram_split_shift],
- config & Autoselect ? "autoselect/" : "",
- media_tbl[(config & Xcvr) >> Xcvr_shift].name);
- vp->default_media = (config & Xcvr) >> Xcvr_shift;
- vp->autoselect = config & Autoselect ? 1 : 0;
- dev->if_port = vp->default_media;
- }
- if (vp->media_override != 7) {
- pr_info(" Media override to transceiver type %d (%s).\n",
- vp->media_override,
- media_tbl[vp->media_override].name);
- dev->if_port = vp->media_override;
- }
-
- vp->capabilities = eeprom[16];
- vp->full_bus_master_tx = (vp->capabilities & 0x20) ? 1 : 0;
- /* Rx is broken at 10mbps, so we always disable it. */
- /* vp->full_bus_master_rx = 0; */
- vp->full_bus_master_rx = (vp->capabilities & 0x20) ? 1 : 0;
-
- /* The 3c51x-specific entries in the device structure. */
- dev->netdev_ops = &netdev_ops;
- dev->watchdog_timeo = (400 * HZ) / 1000;
- dev->ethtool_ops = &netdev_ethtool_ops;
-
- return register_netdev(dev);
-}
-
-
-static int corkscrew_open(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
- struct corkscrew_private *vp = netdev_priv(dev);
- __u32 config;
- int i;
-
- /* Before initializing select the active media port. */
- EL3WINDOW(3);
- if (vp->full_duplex)
- outb(0x20, ioaddr + Wn3_MAC_Ctrl); /* Set the full-duplex bit. */
- config = inl(ioaddr + Wn3_Config);
-
- if (vp->media_override != 7) {
- if (corkscrew_debug > 1)
- pr_info("%s: Media override to transceiver %d (%s).\n",
- dev->name, vp->media_override,
- media_tbl[vp->media_override].name);
- dev->if_port = vp->media_override;
- } else if (vp->autoselect) {
- /* Find first available media type, starting with 100baseTx. */
- dev->if_port = 4;
- while (!(vp->available_media & media_tbl[dev->if_port].mask))
- dev->if_port = media_tbl[dev->if_port].next;
-
- if (corkscrew_debug > 1)
- pr_debug("%s: Initial media type %s.\n",
- dev->name, media_tbl[dev->if_port].name);
-
- init_timer(&vp->timer);
- vp->timer.expires = jiffies + media_tbl[dev->if_port].wait;
- vp->timer.data = (unsigned long) dev;
- vp->timer.function = corkscrew_timer; /* timer handler */
- add_timer(&vp->timer);
- } else
- dev->if_port = vp->default_media;
-
- config = (config & ~Xcvr) | (dev->if_port << Xcvr_shift);
- outl(config, ioaddr + Wn3_Config);
-
- if (corkscrew_debug > 1) {
- pr_debug("%s: corkscrew_open() InternalConfig %8.8x.\n",
- dev->name, config);
- }
-
- outw(TxReset, ioaddr + EL3_CMD);
- for (i = 20; i >= 0; i--)
- if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
- break;
-
- outw(RxReset, ioaddr + EL3_CMD);
- /* Wait a few ticks for the RxReset command to complete. */
- for (i = 20; i >= 0; i--)
- if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
- break;
-
- outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
-
- /* Use the now-standard shared IRQ implementation. */
- if (vp->capabilities == 0x11c7) {
- /* Corkscrew: Cannot share ISA resources. */
- if (dev->irq == 0 ||
- dev->dma == 0 ||
- request_irq(dev->irq, corkscrew_interrupt, 0,
- vp->product_name, dev))
- return -EAGAIN;
- enable_dma(dev->dma);
- set_dma_mode(dev->dma, DMA_MODE_CASCADE);
- } else if (request_irq(dev->irq, corkscrew_interrupt, IRQF_SHARED,
- vp->product_name, dev)) {
- return -EAGAIN;
- }
-
- if (corkscrew_debug > 1) {
- EL3WINDOW(4);
- pr_debug("%s: corkscrew_open() irq %d media status %4.4x.\n",
- dev->name, dev->irq, inw(ioaddr + Wn4_Media));
- }
-
- /* Set the station address and mask in window 2 each time opened. */
- EL3WINDOW(2);
- for (i = 0; i < 6; i++)
- outb(dev->dev_addr[i], ioaddr + i);
- for (; i < 12; i += 2)
- outw(0, ioaddr + i);
-
- if (dev->if_port == 3)
- /* Start the thinnet transceiver. We should really wait 50ms... */
- outw(StartCoax, ioaddr + EL3_CMD);
- EL3WINDOW(4);
- outw((inw(ioaddr + Wn4_Media) & ~(Media_10TP | Media_SQE)) |
- media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
-
- /* Switch to the stats window, and clear all stats by reading. */
- outw(StatsDisable, ioaddr + EL3_CMD);
- EL3WINDOW(6);
- for (i = 0; i < 10; i++)
- inb(ioaddr + i);
- inw(ioaddr + 10);
- inw(ioaddr + 12);
- /* New: On the Vortex we must also clear the BadSSD counter. */
- EL3WINDOW(4);
- inb(ioaddr + 12);
- /* ..and on the Boomerang we enable the extra statistics bits. */
- outw(0x0040, ioaddr + Wn4_NetDiag);
-
- /* Switch to register set 7 for normal use. */
- EL3WINDOW(7);
-
- if (vp->full_bus_master_rx) { /* Boomerang bus master. */
- vp->cur_rx = vp->dirty_rx = 0;
- if (corkscrew_debug > 2)
- pr_debug("%s: Filling in the Rx ring.\n", dev->name);
- for (i = 0; i < RX_RING_SIZE; i++) {
- struct sk_buff *skb;
- if (i < (RX_RING_SIZE - 1))
- vp->rx_ring[i].next =
- isa_virt_to_bus(&vp->rx_ring[i + 1]);
- else
- vp->rx_ring[i].next = 0;
- vp->rx_ring[i].status = 0; /* Clear complete bit. */
- vp->rx_ring[i].length = PKT_BUF_SZ | 0x80000000;
- skb = dev_alloc_skb(PKT_BUF_SZ);
- vp->rx_skbuff[i] = skb;
- if (skb == NULL)
- break; /* Bad news! */
- skb->dev = dev; /* Mark as being used by this device. */
- skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
- vp->rx_ring[i].addr = isa_virt_to_bus(skb->data);
- }
- if (i != 0)
- vp->rx_ring[i - 1].next =
- isa_virt_to_bus(&vp->rx_ring[0]); /* Wrap the ring. */
- outl(isa_virt_to_bus(&vp->rx_ring[0]), ioaddr + UpListPtr);
- }
- if (vp->full_bus_master_tx) { /* Boomerang bus master Tx. */
- vp->cur_tx = vp->dirty_tx = 0;
- outb(PKT_BUF_SZ >> 8, ioaddr + TxFreeThreshold); /* Room for a packet. */
- /* Clear the Tx ring. */
- for (i = 0; i < TX_RING_SIZE; i++)
- vp->tx_skbuff[i] = NULL;
- outl(0, ioaddr + DownListPtr);
- }
- /* Set receiver mode: presumably accept broadcast and phys addr only. */
- set_rx_mode(dev);
- outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
-
- netif_start_queue(dev);
-
- outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
- outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
- /* Allow status bits to be seen. */
- outw(SetStatusEnb | AdapterFailure | IntReq | StatsFull |
- (vp->full_bus_master_tx ? DownComplete : TxAvailable) |
- (vp->full_bus_master_rx ? UpComplete : RxComplete) |
- (vp->bus_master ? DMADone : 0), ioaddr + EL3_CMD);
- /* Ack all pending events, and set active indicator mask. */
- outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
- ioaddr + EL3_CMD);
- outw(SetIntrEnb | IntLatch | TxAvailable | RxComplete | StatsFull
- | (vp->bus_master ? DMADone : 0) | UpComplete | DownComplete,
- ioaddr + EL3_CMD);
-
- return 0;
-}
-
-static void corkscrew_timer(unsigned long data)
-{
-#ifdef AUTOMEDIA
- struct net_device *dev = (struct net_device *) data;
- struct corkscrew_private *vp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
- unsigned long flags;
- int ok = 0;
-
- if (corkscrew_debug > 1)
- pr_debug("%s: Media selection timer tick happened, %s.\n",
- dev->name, media_tbl[dev->if_port].name);
-
- spin_lock_irqsave(&vp->lock, flags);
-
- {
- int old_window = inw(ioaddr + EL3_CMD) >> 13;
- int media_status;
- EL3WINDOW(4);
- media_status = inw(ioaddr + Wn4_Media);
- switch (dev->if_port) {
- case 0:
- case 4:
- case 5: /* 10baseT, 100baseTX, 100baseFX */
- if (media_status & Media_LnkBeat) {
- ok = 1;
- if (corkscrew_debug > 1)
- pr_debug("%s: Media %s has link beat, %x.\n",
- dev->name,
- media_tbl[dev->if_port].name,
- media_status);
- } else if (corkscrew_debug > 1)
- pr_debug("%s: Media %s has no link beat, %x.\n",
- dev->name,
- media_tbl[dev->if_port].name,
- media_status);
-
- break;
- default: /* Other media types handled by Tx timeouts. */
- if (corkscrew_debug > 1)
- pr_debug("%s: Media %s has no indication, %x.\n",
- dev->name,
- media_tbl[dev->if_port].name,
- media_status);
- ok = 1;
- }
- if (!ok) {
- __u32 config;
-
- do {
- dev->if_port =
- media_tbl[dev->if_port].next;
- }
- while (!(vp->available_media & media_tbl[dev->if_port].mask));
-
- if (dev->if_port == 8) { /* Go back to default. */
- dev->if_port = vp->default_media;
- if (corkscrew_debug > 1)
- pr_debug("%s: Media selection failing, using default %s port.\n",
- dev->name,
- media_tbl[dev->if_port].name);
- } else {
- if (corkscrew_debug > 1)
- pr_debug("%s: Media selection failed, now trying %s port.\n",
- dev->name,
- media_tbl[dev->if_port].name);
- vp->timer.expires = jiffies + media_tbl[dev->if_port].wait;
- add_timer(&vp->timer);
- }
- outw((media_status & ~(Media_10TP | Media_SQE)) |
- media_tbl[dev->if_port].media_bits,
- ioaddr + Wn4_Media);
-
- EL3WINDOW(3);
- config = inl(ioaddr + Wn3_Config);
- config = (config & ~Xcvr) | (dev->if_port << Xcvr_shift);
- outl(config, ioaddr + Wn3_Config);
-
- outw(dev->if_port == 3 ? StartCoax : StopCoax,
- ioaddr + EL3_CMD);
- }
- EL3WINDOW(old_window);
- }
-
- spin_unlock_irqrestore(&vp->lock, flags);
- if (corkscrew_debug > 1)
- pr_debug("%s: Media selection timer finished, %s.\n",
- dev->name, media_tbl[dev->if_port].name);
-
-#endif /* AUTOMEDIA */
-}
-
-static void corkscrew_timeout(struct net_device *dev)
-{
- int i;
- struct corkscrew_private *vp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
-
- pr_warning("%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
- dev->name, inb(ioaddr + TxStatus),
- inw(ioaddr + EL3_STATUS));
- /* Slight code bloat to be user friendly. */
- if ((inb(ioaddr + TxStatus) & 0x88) == 0x88)
- pr_warning("%s: Transmitter encountered 16 collisions --"
- " network cable problem?\n", dev->name);
-#ifndef final_version
- pr_debug(" Flags; bus-master %d, full %d; dirty %d current %d.\n",
- vp->full_bus_master_tx, vp->tx_full, vp->dirty_tx,
- vp->cur_tx);
- pr_debug(" Down list %8.8x vs. %p.\n", inl(ioaddr + DownListPtr),
- &vp->tx_ring[0]);
- for (i = 0; i < TX_RING_SIZE; i++) {
- pr_debug(" %d: %p length %8.8x status %8.8x\n", i,
- &vp->tx_ring[i],
- vp->tx_ring[i].length, vp->tx_ring[i].status);
- }
-#endif
- /* Issue TX_RESET and TX_START commands. */
- outw(TxReset, ioaddr + EL3_CMD);
- for (i = 20; i >= 0; i--)
- if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
- break;
- outw(TxEnable, ioaddr + EL3_CMD);
- dev->trans_start = jiffies; /* prevent tx timeout */
- dev->stats.tx_errors++;
- dev->stats.tx_dropped++;
- netif_wake_queue(dev);
-}
-
-static netdev_tx_t corkscrew_start_xmit(struct sk_buff *skb,
- struct net_device *dev)
-{
- struct corkscrew_private *vp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
-
- /* Block a timer-based transmit from overlapping. */
-
- netif_stop_queue(dev);
-
- if (vp->full_bus_master_tx) { /* BOOMERANG bus-master */
- /* Calculate the next Tx descriptor entry. */
- int entry = vp->cur_tx % TX_RING_SIZE;
- struct boom_tx_desc *prev_entry;
- unsigned long flags;
- int i;
-
- if (vp->tx_full) /* No room to transmit with */
- return NETDEV_TX_BUSY;
- if (vp->cur_tx != 0)
- prev_entry = &vp->tx_ring[(vp->cur_tx - 1) % TX_RING_SIZE];
- else
- prev_entry = NULL;
- if (corkscrew_debug > 3)
- pr_debug("%s: Trying to send a packet, Tx index %d.\n",
- dev->name, vp->cur_tx);
- /* vp->tx_full = 1; */
- vp->tx_skbuff[entry] = skb;
- vp->tx_ring[entry].next = 0;
- vp->tx_ring[entry].addr = isa_virt_to_bus(skb->data);
- vp->tx_ring[entry].length = skb->len | 0x80000000;
- vp->tx_ring[entry].status = skb->len | 0x80000000;
-
- spin_lock_irqsave(&vp->lock, flags);
- outw(DownStall, ioaddr + EL3_CMD);
- /* Wait for the stall to complete. */
- for (i = 20; i >= 0; i--)
- if ((inw(ioaddr + EL3_STATUS) & CmdInProgress) == 0)
- break;
- if (prev_entry)
- prev_entry->next = isa_virt_to_bus(&vp->tx_ring[entry]);
- if (inl(ioaddr + DownListPtr) == 0) {
- outl(isa_virt_to_bus(&vp->tx_ring[entry]),
- ioaddr + DownListPtr);
- queued_packet++;
- }
- outw(DownUnstall, ioaddr + EL3_CMD);
- spin_unlock_irqrestore(&vp->lock, flags);
-
- vp->cur_tx++;
- if (vp->cur_tx - vp->dirty_tx > TX_RING_SIZE - 1)
- vp->tx_full = 1;
- else { /* Clear previous interrupt enable. */
- if (prev_entry)
- prev_entry->status &= ~0x80000000;
- netif_wake_queue(dev);
- }
- return NETDEV_TX_OK;
- }
- /* Put out the doubleword header... */
- outl(skb->len, ioaddr + TX_FIFO);
- dev->stats.tx_bytes += skb->len;
-#ifdef VORTEX_BUS_MASTER
- if (vp->bus_master) {
- /* Set the bus-master controller to transfer the packet. */
- outl((int) (skb->data), ioaddr + Wn7_MasterAddr);
- outw((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
- vp->tx_skb = skb;
- outw(StartDMADown, ioaddr + EL3_CMD);
- /* queue will be woken at the DMADone interrupt. */
- } else {
- /* ... and the packet rounded to a doubleword. */
- outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
- dev_kfree_skb(skb);
- if (inw(ioaddr + TxFree) > 1536) {
- netif_wake_queue(dev);
- } else
- /* Interrupt us when the FIFO has room for max-sized packet. */
- outw(SetTxThreshold + (1536 >> 2),
- ioaddr + EL3_CMD);
- }
-#else
- /* ... and the packet rounded to a doubleword. */
- outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
- dev_kfree_skb(skb);
- if (inw(ioaddr + TxFree) > 1536) {
- netif_wake_queue(dev);
- } else
- /* Interrupt us when the FIFO has room for max-sized packet. */
- outw(SetTxThreshold + (1536 >> 2), ioaddr + EL3_CMD);
-#endif /* bus master */
-
-
- /* Clear the Tx status stack. */
- {
- short tx_status;
- int i = 4;
-
- while (--i > 0 && (tx_status = inb(ioaddr + TxStatus)) > 0) {
- if (tx_status & 0x3C) { /* A Tx-disabling error occurred. */
- if (corkscrew_debug > 2)
- pr_debug("%s: Tx error, status %2.2x.\n",
- dev->name, tx_status);
- if (tx_status & 0x04)
- dev->stats.tx_fifo_errors++;
- if (tx_status & 0x38)
- dev->stats.tx_aborted_errors++;
- if (tx_status & 0x30) {
- int j;
- outw(TxReset, ioaddr + EL3_CMD);
- for (j = 20; j >= 0; j--)
- if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
- break;
- }
- outw(TxEnable, ioaddr + EL3_CMD);
- }
- outb(0x00, ioaddr + TxStatus); /* Pop the status stack. */
- }
- }
- return NETDEV_TX_OK;
-}
-
-/* The interrupt handler does all of the Rx thread work and cleans up
- after the Tx thread. */
-
-static irqreturn_t corkscrew_interrupt(int irq, void *dev_id)
-{
- /* Use the now-standard shared IRQ implementation. */
- struct net_device *dev = dev_id;
- struct corkscrew_private *lp = netdev_priv(dev);
- int ioaddr, status;
- int latency;
- int i = max_interrupt_work;
-
- ioaddr = dev->base_addr;
- latency = inb(ioaddr + Timer);
-
- spin_lock(&lp->lock);
-
- status = inw(ioaddr + EL3_STATUS);
-
- if (corkscrew_debug > 4)
- pr_debug("%s: interrupt, status %4.4x, timer %d.\n",
- dev->name, status, latency);
- if ((status & 0xE000) != 0xE000) {
- static int donedidthis;
- /* Some interrupt controllers store a bogus interrupt from boot-time.
- Ignore a single early interrupt, but don't hang the machine for
- other interrupt problems. */
- if (donedidthis++ > 100) {
- pr_err("%s: Bogus interrupt, bailing. Status %4.4x, start=%d.\n",
- dev->name, status, netif_running(dev));
- free_irq(dev->irq, dev);
- dev->irq = -1;
- }
- }
-
- do {
- if (corkscrew_debug > 5)
- pr_debug("%s: In interrupt loop, status %4.4x.\n",
- dev->name, status);
- if (status & RxComplete)
- corkscrew_rx(dev);
-
- if (status & TxAvailable) {
- if (corkscrew_debug > 5)
- pr_debug(" TX room bit was handled.\n");
- /* There's room in the FIFO for a full-sized packet. */
- outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
- netif_wake_queue(dev);
- }
- if (status & DownComplete) {
- unsigned int dirty_tx = lp->dirty_tx;
-
- while (lp->cur_tx - dirty_tx > 0) {
- int entry = dirty_tx % TX_RING_SIZE;
- if (inl(ioaddr + DownListPtr) == isa_virt_to_bus(&lp->tx_ring[entry]))
- break; /* It still hasn't been processed. */
- if (lp->tx_skbuff[entry]) {
- dev_kfree_skb_irq(lp->tx_skbuff[entry]);
- lp->tx_skbuff[entry] = NULL;
- }
- dirty_tx++;
- }
- lp->dirty_tx = dirty_tx;
- outw(AckIntr | DownComplete, ioaddr + EL3_CMD);
- if (lp->tx_full && (lp->cur_tx - dirty_tx <= TX_RING_SIZE - 1)) {
- lp->tx_full = 0;
- netif_wake_queue(dev);
- }
- }
-#ifdef VORTEX_BUS_MASTER
- if (status & DMADone) {
- outw(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
- dev_kfree_skb_irq(lp->tx_skb); /* Release the transferred buffer */
- netif_wake_queue(dev);
- }
-#endif
- if (status & UpComplete) {
- boomerang_rx(dev);
- outw(AckIntr | UpComplete, ioaddr + EL3_CMD);
- }
- if (status & (AdapterFailure | RxEarly | StatsFull)) {
- /* Handle all uncommon interrupts at once. */
- if (status & RxEarly) { /* Rx early is unused. */
- corkscrew_rx(dev);
- outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
- }
- if (status & StatsFull) { /* Empty statistics. */
- static int DoneDidThat;
- if (corkscrew_debug > 4)
- pr_debug("%s: Updating stats.\n", dev->name);
- update_stats(ioaddr, dev);
- /* DEBUG HACK: Disable statistics as an interrupt source. */
- /* This occurs when we have the wrong media type! */
- if (DoneDidThat == 0 && inw(ioaddr + EL3_STATUS) & StatsFull) {
- int win, reg;
- pr_notice("%s: Updating stats failed, disabling stats as an interrupt source.\n",
- dev->name);
- for (win = 0; win < 8; win++) {
- EL3WINDOW(win);
- pr_notice("Vortex window %d:", win);
- for (reg = 0; reg < 16; reg++)
- pr_cont(" %2.2x", inb(ioaddr + reg));
- pr_cont("\n");
- }
- EL3WINDOW(7);
- outw(SetIntrEnb | TxAvailable |
- RxComplete | AdapterFailure |
- UpComplete | DownComplete |
- TxComplete, ioaddr + EL3_CMD);
- DoneDidThat++;
- }
- }
- if (status & AdapterFailure) {
- /* Adapter failure requires Rx reset and reinit. */
- outw(RxReset, ioaddr + EL3_CMD);
- /* Set the Rx filter to the current state. */
- set_rx_mode(dev);
- outw(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
- outw(AckIntr | AdapterFailure,
- ioaddr + EL3_CMD);
- }
- }
-
- if (--i < 0) {
- pr_err("%s: Too much work in interrupt, status %4.4x. Disabling functions (%4.4x).\n",
- dev->name, status, SetStatusEnb | ((~status) & 0x7FE));
- /* Disable all pending interrupts. */
- outw(SetStatusEnb | ((~status) & 0x7FE), ioaddr + EL3_CMD);
- outw(AckIntr | 0x7FF, ioaddr + EL3_CMD);
- break;
- }
- /* Acknowledge the IRQ. */
- outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
-
- } while ((status = inw(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));
-
- spin_unlock(&lp->lock);
-
- if (corkscrew_debug > 4)
- pr_debug("%s: exiting interrupt, status %4.4x.\n", dev->name, status);
- return IRQ_HANDLED;
-}
-
-static int corkscrew_rx(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
- int i;
- short rx_status;
-
- if (corkscrew_debug > 5)
- pr_debug(" In rx_packet(), status %4.4x, rx_status %4.4x.\n",
- inw(ioaddr + EL3_STATUS), inw(ioaddr + RxStatus));
- while ((rx_status = inw(ioaddr + RxStatus)) > 0) {
- if (rx_status & 0x4000) { /* Error, update stats. */
- unsigned char rx_error = inb(ioaddr + RxErrors);
- if (corkscrew_debug > 2)
- pr_debug(" Rx error: status %2.2x.\n",
- rx_error);
- dev->stats.rx_errors++;
- if (rx_error & 0x01)
- dev->stats.rx_over_errors++;
- if (rx_error & 0x02)
- dev->stats.rx_length_errors++;
- if (rx_error & 0x04)
- dev->stats.rx_frame_errors++;
- if (rx_error & 0x08)
- dev->stats.rx_crc_errors++;
- if (rx_error & 0x10)
- dev->stats.rx_length_errors++;
- } else {
- /* The packet length: up to 4.5K! */
- short pkt_len = rx_status & 0x1fff;
- struct sk_buff *skb;
-
- skb = dev_alloc_skb(pkt_len + 5 + 2);
- if (corkscrew_debug > 4)
- pr_debug("Receiving packet size %d status %4.4x.\n",
- pkt_len, rx_status);
- if (skb != NULL) {
- skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
- /* 'skb_put()' points to the start of sk_buff data area. */
- insl(ioaddr + RX_FIFO,
- skb_put(skb, pkt_len),
- (pkt_len + 3) >> 2);
- outw(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
- skb->protocol = eth_type_trans(skb, dev);
- netif_rx(skb);
- dev->stats.rx_packets++;
- dev->stats.rx_bytes += pkt_len;
- /* Wait a limited time to go to next packet. */
- for (i = 200; i >= 0; i--)
- if (! (inw(ioaddr + EL3_STATUS) & CmdInProgress))
- break;
- continue;
- } else if (corkscrew_debug)
- pr_debug("%s: Couldn't allocate a sk_buff of size %d.\n", dev->name, pkt_len);
- }
- outw(RxDiscard, ioaddr + EL3_CMD);
- dev->stats.rx_dropped++;
- /* Wait a limited time to skip this packet. */
- for (i = 200; i >= 0; i--)
- if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
- break;
- }
- return 0;
-}
-
-static int boomerang_rx(struct net_device *dev)
-{
- struct corkscrew_private *vp = netdev_priv(dev);
- int entry = vp->cur_rx % RX_RING_SIZE;
- int ioaddr = dev->base_addr;
- int rx_status;
-
- if (corkscrew_debug > 5)
- pr_debug(" In boomerang_rx(), status %4.4x, rx_status %4.4x.\n",
- inw(ioaddr + EL3_STATUS), inw(ioaddr + RxStatus));
- while ((rx_status = vp->rx_ring[entry].status) & RxDComplete) {
- if (rx_status & RxDError) { /* Error, update stats. */
- unsigned char rx_error = rx_status >> 16;
- if (corkscrew_debug > 2)
- pr_debug(" Rx error: status %2.2x.\n",
- rx_error);
- dev->stats.rx_errors++;
- if (rx_error & 0x01)
- dev->stats.rx_over_errors++;
- if (rx_error & 0x02)
- dev->stats.rx_length_errors++;
- if (rx_error & 0x04)
- dev->stats.rx_frame_errors++;
- if (rx_error & 0x08)
- dev->stats.rx_crc_errors++;
- if (rx_error & 0x10)
- dev->stats.rx_length_errors++;
- } else {
- /* The packet length: up to 4.5K! */
- short pkt_len = rx_status & 0x1fff;
- struct sk_buff *skb;
-
- dev->stats.rx_bytes += pkt_len;
- if (corkscrew_debug > 4)
- pr_debug("Receiving packet size %d status %4.4x.\n",
- pkt_len, rx_status);
-
- /* Check if the packet is long enough to just accept without
- copying to a properly sized skbuff. */
- if (pkt_len < rx_copybreak &&
- (skb = dev_alloc_skb(pkt_len + 4)) != NULL) {
- skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
- /* 'skb_put()' points to the start of sk_buff data area. */
- memcpy(skb_put(skb, pkt_len),
- isa_bus_to_virt(vp->rx_ring[entry].
- addr), pkt_len);
- rx_copy++;
- } else {
- void *temp;
- /* Pass up the skbuff already on the Rx ring. */
- skb = vp->rx_skbuff[entry];
- vp->rx_skbuff[entry] = NULL;
- temp = skb_put(skb, pkt_len);
- /* Remove this checking code for final release. */
- if (isa_bus_to_virt(vp->rx_ring[entry].addr) != temp)
- pr_warning("%s: Warning -- the skbuff addresses do not match"
- " in boomerang_rx: %p vs. %p / %p.\n",
- dev->name,
- isa_bus_to_virt(vp->
- rx_ring[entry].
- addr), skb->head,
- temp);
- rx_nocopy++;
- }
- skb->protocol = eth_type_trans(skb, dev);
- netif_rx(skb);
- dev->stats.rx_packets++;
- }
- entry = (++vp->cur_rx) % RX_RING_SIZE;
- }
- /* Refill the Rx ring buffers. */
- for (; vp->cur_rx - vp->dirty_rx > 0; vp->dirty_rx++) {
- struct sk_buff *skb;
- entry = vp->dirty_rx % RX_RING_SIZE;
- if (vp->rx_skbuff[entry] == NULL) {
- skb = dev_alloc_skb(PKT_BUF_SZ);
- if (skb == NULL)
- break; /* Bad news! */
- skb->dev = dev; /* Mark as being used by this device. */
- skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
- vp->rx_ring[entry].addr = isa_virt_to_bus(skb->data);
- vp->rx_skbuff[entry] = skb;
- }
- vp->rx_ring[entry].status = 0; /* Clear complete bit. */
- }
- return 0;
-}
-
-static int corkscrew_close(struct net_device *dev)
-{
- struct corkscrew_private *vp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
- int i;
-
- netif_stop_queue(dev);
-
- if (corkscrew_debug > 1) {
- pr_debug("%s: corkscrew_close() status %4.4x, Tx status %2.2x.\n",
- dev->name, inw(ioaddr + EL3_STATUS),
- inb(ioaddr + TxStatus));
- pr_debug("%s: corkscrew close stats: rx_nocopy %d rx_copy %d tx_queued %d.\n",
- dev->name, rx_nocopy, rx_copy, queued_packet);
- }
-
- del_timer(&vp->timer);
-
- /* Turn off statistics ASAP. We update lp->stats below. */
- outw(StatsDisable, ioaddr + EL3_CMD);
-
- /* Disable the receiver and transmitter. */
- outw(RxDisable, ioaddr + EL3_CMD);
- outw(TxDisable, ioaddr + EL3_CMD);
-
- if (dev->if_port == XCVR_10base2)
- /* Turn off thinnet power. Green! */
- outw(StopCoax, ioaddr + EL3_CMD);
-
- free_irq(dev->irq, dev);
-
- outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
-
- update_stats(ioaddr, dev);
- if (vp->full_bus_master_rx) { /* Free Boomerang bus master Rx buffers. */
- outl(0, ioaddr + UpListPtr);
- for (i = 0; i < RX_RING_SIZE; i++)
- if (vp->rx_skbuff[i]) {
- dev_kfree_skb(vp->rx_skbuff[i]);
- vp->rx_skbuff[i] = NULL;
- }
- }
- if (vp->full_bus_master_tx) { /* Free Boomerang bus master Tx buffers. */
- outl(0, ioaddr + DownListPtr);
- for (i = 0; i < TX_RING_SIZE; i++)
- if (vp->tx_skbuff[i]) {
- dev_kfree_skb(vp->tx_skbuff[i]);
- vp->tx_skbuff[i] = NULL;
- }
- }
-
- return 0;
-}
-
-static struct net_device_stats *corkscrew_get_stats(struct net_device *dev)
-{
- struct corkscrew_private *vp = netdev_priv(dev);
- unsigned long flags;
-
- if (netif_running(dev)) {
- spin_lock_irqsave(&vp->lock, flags);
- update_stats(dev->base_addr, dev);
- spin_unlock_irqrestore(&vp->lock, flags);
- }
- return &dev->stats;
-}
-
-/* Update statistics.
- Unlike with the EL3 we need not worry about interrupts changing
- the window setting from underneath us, but we must still guard
- against a race condition with a StatsUpdate interrupt updating the
- table. This is done by checking that the ASM (!) code generated uses
- atomic updates with '+='.
- */
-static void update_stats(int ioaddr, struct net_device *dev)
-{
- /* Unlike the 3c5x9 we need not turn off stats updates while reading. */
- /* Switch to the stats window, and read everything. */
- EL3WINDOW(6);
- dev->stats.tx_carrier_errors += inb(ioaddr + 0);
- dev->stats.tx_heartbeat_errors += inb(ioaddr + 1);
- /* Multiple collisions. */ inb(ioaddr + 2);
- dev->stats.collisions += inb(ioaddr + 3);
- dev->stats.tx_window_errors += inb(ioaddr + 4);
- dev->stats.rx_fifo_errors += inb(ioaddr + 5);
- dev->stats.tx_packets += inb(ioaddr + 6);
- dev->stats.tx_packets += (inb(ioaddr + 9) & 0x30) << 4;
- /* Rx packets */ inb(ioaddr + 7);
- /* Must read to clear */
- /* Tx deferrals */ inb(ioaddr + 8);
- /* Don't bother with register 9, an extension of registers 6&7.
- If we do use the 6&7 values the atomic update assumption above
- is invalid. */
- inw(ioaddr + 10); /* Total Rx and Tx octets. */
- inw(ioaddr + 12);
- /* New: On the Vortex we must also clear the BadSSD counter. */
- EL3WINDOW(4);
- inb(ioaddr + 12);
-
- /* We change back to window 7 (not 1) with the Vortex. */
- EL3WINDOW(7);
-}
-
-/* This new version of set_rx_mode() supports v1.4 kernels.
- The Vortex chip has no documented multicast filter, so the only
- multicast setting is to receive all multicast frames. At least
- the chip has a very clean way to set the mode, unlike many others. */
-static void set_rx_mode(struct net_device *dev)
-{
- int ioaddr = dev->base_addr;
- short new_mode;
-
- if (dev->flags & IFF_PROMISC) {
- if (corkscrew_debug > 3)
- pr_debug("%s: Setting promiscuous mode.\n",
- dev->name);
- new_mode = SetRxFilter | RxStation | RxMulticast | RxBroadcast | RxProm;
- } else if (!netdev_mc_empty(dev) || dev->flags & IFF_ALLMULTI) {
- new_mode = SetRxFilter | RxStation | RxMulticast | RxBroadcast;
- } else
- new_mode = SetRxFilter | RxStation | RxBroadcast;
-
- outw(new_mode, ioaddr + EL3_CMD);
-}
-
-static void netdev_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- strcpy(info->driver, DRV_NAME);
- strcpy(info->version, DRV_VERSION);
- sprintf(info->bus_info, "ISA 0x%lx", dev->base_addr);
-}
-
-static u32 netdev_get_msglevel(struct net_device *dev)
-{
- return corkscrew_debug;
-}
-
-static void netdev_set_msglevel(struct net_device *dev, u32 level)
-{
- corkscrew_debug = level;
-}
-
-static const struct ethtool_ops netdev_ethtool_ops = {
- .get_drvinfo = netdev_get_drvinfo,
- .get_msglevel = netdev_get_msglevel,
- .set_msglevel = netdev_set_msglevel,
-};
-
-
-#ifdef MODULE
-void cleanup_module(void)
-{
- while (!list_empty(&root_corkscrew_dev)) {
- struct net_device *dev;
- struct corkscrew_private *vp;
-
- vp = list_entry(root_corkscrew_dev.next,
- struct corkscrew_private, list);
- dev = vp->our_dev;
- unregister_netdev(dev);
- cleanup_card(dev);
- free_netdev(dev);
- }
-}
-#endif /* MODULE */
+++ /dev/null
-/* EtherLinkXL.c: A 3Com EtherLink PCI III/XL ethernet driver for linux. */
-/*
- Written 1996-1999 by Donald Becker.
-
- This software may be used and distributed according to the terms
- of the GNU General Public License, incorporated herein by reference.
-
- This driver is for the 3Com "Vortex" and "Boomerang" series ethercards.
- Members of the series include Fast EtherLink 3c590/3c592/3c595/3c597
- and the EtherLink XL 3c900 and 3c905 cards.
-
- Problem reports and questions should be directed to
- vortex@scyld.com
-
- The author may be reached as becker@scyld.com, or C/O
- Scyld Computing Corporation
- 410 Severn Ave., Suite 210
- Annapolis MD 21403
-
-*/
-
-/*
- * FIXME: This driver _could_ support MTU changing, but doesn't. See Don's hamachi.c implementation
- * as well as other drivers
- *
- * NOTE: If you make 'vortex_debug' a constant (#define vortex_debug 0) the driver shrinks by 2k
- * due to dead code elimination. There will be some performance benefits from this due to
- * elimination of all the tests and reduced cache footprint.
- */
-
-
-#define DRV_NAME "3c59x"
-
-
-
-/* A few values that may be tweaked. */
-/* Keep the ring sizes a power of two for efficiency. */
-#define TX_RING_SIZE 16
-#define RX_RING_SIZE 32
-#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
-
-/* "Knobs" that adjust features and parameters. */
-/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
- Setting to > 1512 effectively disables this feature. */
-#ifndef __arm__
-static int rx_copybreak = 200;
-#else
-/* ARM systems perform better by disregarding the bus-master
- transfer capability of these cards. -- rmk */
-static int rx_copybreak = 1513;
-#endif
-/* Allow setting MTU to a larger size, bypassing the normal ethernet setup. */
-static const int mtu = 1500;
-/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
-static int max_interrupt_work = 32;
-/* Tx timeout interval (millisecs) */
-static int watchdog = 5000;
-
-/* Allow aggregation of Tx interrupts. Saves CPU load at the cost
- * of possible Tx stalls if the system is blocking interrupts
- * somewhere else. Undefine this to disable.
- */
-#define tx_interrupt_mitigation 1
-
-/* Put out somewhat more debugging messages. (0: no msg, 1 minimal .. 6). */
-#define vortex_debug debug
-#ifdef VORTEX_DEBUG
-static int vortex_debug = VORTEX_DEBUG;
-#else
-static int vortex_debug = 1;
-#endif
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/string.h>
-#include <linux/timer.h>
-#include <linux/errno.h>
-#include <linux/in.h>
-#include <linux/ioport.h>
-#include <linux/interrupt.h>
-#include <linux/pci.h>
-#include <linux/mii.h>
-#include <linux/init.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/ethtool.h>
-#include <linux/highmem.h>
-#include <linux/eisa.h>
-#include <linux/bitops.h>
-#include <linux/jiffies.h>
-#include <linux/gfp.h>
-#include <asm/irq.h> /* For nr_irqs only. */
-#include <asm/io.h>
-#include <asm/uaccess.h>
-
-/* Kernel compatibility defines, some common to David Hinds' PCMCIA package.
- This is only in the support-all-kernels source code. */
-
-#define RUN_AT(x) (jiffies + (x))
-
-#include <linux/delay.h>
-
-
-static const char version[] __devinitconst =
- DRV_NAME ": Donald Becker and others.\n";
-
-MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
-MODULE_DESCRIPTION("3Com 3c59x/3c9xx ethernet driver ");
-MODULE_LICENSE("GPL");
-
-
-/* Operational parameters that are usually not changed. */
-
-/* The Vortex size is twice that of the original EtherLinkIII series: the
- runtime register window, window 1, is now always mapped in.
- The Boomerang size is twice as large as the Vortex -- it has additional
- bus master control registers. */
-#define VORTEX_TOTAL_SIZE 0x20
-#define BOOMERANG_TOTAL_SIZE 0x40
-
-/* Set iff a MII transceiver on any interface requires mdio preamble.
-   This is only set with the original DP83840 on older 3c905 boards, so the extra
- code size of a per-interface flag is not worthwhile. */
-static char mii_preamble_required;
-
-#define PFX DRV_NAME ": "
-
-
-
-/*
- Theory of Operation
-
-I. Board Compatibility
-
-This device driver is designed for the 3Com FastEtherLink and FastEtherLink
-XL, 3Com's PCI to 10/100baseT adapters. It also works with the 10Mbps
-versions of the FastEtherLink cards. The supported product IDs are
- 3c590, 3c592, 3c595, 3c597, 3c900, 3c905
-
-The related ISA 3c515 is supported with a separate driver, 3c515.c, included
-with the kernel source or available from
- cesdis.gsfc.nasa.gov:/pub/linux/drivers/3c515.html
-
-II. Board-specific settings
-
-PCI bus devices are configured by the system at boot time, so no jumpers
-need to be set on the board. The system BIOS should be set to assign the
-PCI INTA signal to an otherwise unused system IRQ line.
-
-The EEPROM settings for media type and forced-full-duplex are observed.
-The EEPROM media type should be left at the default "autoselect" unless using
-10base2 or AUI connections, which cannot be reliably detected.
-
-III. Driver operation
-
-The 3c59x series use an interface that's very similar to the previous 3c5x9
-series. The primary interface is two programmed-I/O FIFOs, with an
-alternate single-contiguous-region bus-master transfer (see next).
-
-The 3c900 "Boomerang" series uses a full-bus-master interface with separate
-lists of transmit and receive descriptors, similar to the AMD LANCE/PCnet,
-DEC Tulip and Intel Speedo3. The first chip version retains a compatible
-programmed-I/O interface that has been removed in 'B' and subsequent board
-revisions.
-
-One extension that is advertised in a very large font is that the adapters
-are capable of being bus masters. On the Vortex chip this capability was
-only for a single contiguous region making it far less useful than the full
-bus master capability. There is a significant performance impact of taking
-an extra interrupt or polling for the completion of each transfer, as well
-as difficulty sharing the single transfer engine between the transmit and
-receive threads. Using DMA transfers is a win only with large blocks or
-with the flawed versions of the Intel Orion motherboard PCI controller.
-
-The Boomerang chip's full-bus-master interface is useful, and has the
-currently-unused advantages over other similar chips that queued transmit
-packets may be reordered and receive buffer groups are associated with a
-single frame.
-
-With full-bus-master support, this driver uses a "RX_COPYBREAK" scheme.
-Rather than a fixed intermediate receive buffer, this scheme allocates
-full-sized skbuffs as receive buffers. The value RX_COPYBREAK is used as
-the copying breakpoint: it is chosen to trade off the memory wasted by
-passing the full-sized skbuff to the queue layer for all frames vs. the
-copying cost of copying a frame to a correctly-sized skbuff.
-
-IIIC. Synchronization
-The driver runs as two independent, single-threaded flows of control. One
-is the send-packet routine, which enforces single-threaded use by the
-dev->tbusy flag. The other thread is the interrupt handler, which is single
-threaded by the hardware and other software.
-
-IV. Notes
-
-Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing development
-3c590, 3c595, and 3c900 boards.
-The name "Vortex" is the internal 3Com project name for the PCI ASIC, and
-the EISA version is called "Demon". According to Terry these names come
-from rides at the local amusement park.
-
-The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes!
-This driver only supports ethernet packets because of the skbuff allocation
-limit of 4K.
-*/
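The RX_COPYBREAK trade-off described above reduces to a few lines. The following is a minimal sketch only, not driver code: copybreak_rx(), ring_skb and pkt_len are illustrative names, and the real decision (including ring refill) lives in boomerang_rx() further down.

static struct sk_buff *copybreak_rx(struct sk_buff *ring_skb, int pkt_len)
{
	struct sk_buff *skb;

	if (pkt_len < rx_copybreak &&
	    (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
		skb_reserve(skb, 2);	/* align the IP header */
		memcpy(skb_put(skb, pkt_len), ring_skb->data, pkt_len);
		return skb;		/* the full-sized ring_skb stays on the Rx ring */
	}
	/* Frame at or above the breakpoint (or allocation failed):
	   hand the full-sized ring buffer straight up the stack. */
	return ring_skb;
}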
-
-/* This table drives the PCI probe routines. It's mostly boilerplate in all
- of the drivers, and will likely be provided by some future kernel.
-*/
-enum pci_flags_bit {
- PCI_USES_MASTER=4,
-};
-
-enum { IS_VORTEX=1, IS_BOOMERANG=2, IS_CYCLONE=4, IS_TORNADO=8,
- EEPROM_8BIT=0x10, /* AKPM: Uses 0x230 as the base bitmaps for EEPROM reads */
- HAS_PWR_CTRL=0x20, HAS_MII=0x40, HAS_NWAY=0x80, HAS_CB_FNS=0x100,
- INVERT_MII_PWR=0x200, INVERT_LED_PWR=0x400, MAX_COLLISION_RESET=0x800,
- EEPROM_OFFSET=0x1000, HAS_HWCKSM=0x2000, WNO_XCVR_PWR=0x4000,
- EXTRA_PREAMBLE=0x8000, EEPROM_RESET=0x10000, };
-
-enum vortex_chips {
- CH_3C590 = 0,
- CH_3C592,
- CH_3C597,
- CH_3C595_1,
- CH_3C595_2,
-
- CH_3C595_3,
- CH_3C900_1,
- CH_3C900_2,
- CH_3C900_3,
- CH_3C900_4,
-
- CH_3C900_5,
- CH_3C900B_FL,
- CH_3C905_1,
- CH_3C905_2,
- CH_3C905B_TX,
- CH_3C905B_1,
-
- CH_3C905B_2,
- CH_3C905B_FX,
- CH_3C905C,
- CH_3C9202,
- CH_3C980,
- CH_3C9805,
-
- CH_3CSOHO100_TX,
- CH_3C555,
- CH_3C556,
- CH_3C556B,
- CH_3C575,
-
- CH_3C575_1,
- CH_3CCFE575,
- CH_3CCFE575CT,
- CH_3CCFE656,
- CH_3CCFEM656,
-
- CH_3CCFEM656_1,
- CH_3C450,
- CH_3C920,
- CH_3C982A,
- CH_3C982B,
-
- CH_905BT4,
- CH_920B_EMB_WNM,
-};
-
-
-/* note: this array is directly indexed by the enums above, and MUST
- * be kept in sync with both the enums above, and the PCI device
- * table below
- */
-static struct vortex_chip_info {
- const char *name;
- int flags;
- int drv_flags;
- int io_size;
-} vortex_info_tbl[] __devinitdata = {
- {"3c590 Vortex 10Mbps",
- PCI_USES_MASTER, IS_VORTEX, 32, },
- {"3c592 EISA 10Mbps Demon/Vortex", /* AKPM: from Don's 3c59x_cb.c 0.49H */
- PCI_USES_MASTER, IS_VORTEX, 32, },
- {"3c597 EISA Fast Demon/Vortex", /* AKPM: from Don's 3c59x_cb.c 0.49H */
- PCI_USES_MASTER, IS_VORTEX, 32, },
- {"3c595 Vortex 100baseTx",
- PCI_USES_MASTER, IS_VORTEX, 32, },
- {"3c595 Vortex 100baseT4",
- PCI_USES_MASTER, IS_VORTEX, 32, },
-
- {"3c595 Vortex 100base-MII",
- PCI_USES_MASTER, IS_VORTEX, 32, },
- {"3c900 Boomerang 10baseT",
- PCI_USES_MASTER, IS_BOOMERANG|EEPROM_RESET, 64, },
- {"3c900 Boomerang 10Mbps Combo",
- PCI_USES_MASTER, IS_BOOMERANG|EEPROM_RESET, 64, },
- {"3c900 Cyclone 10Mbps TPO", /* AKPM: from Don's 0.99M */
- PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
- {"3c900 Cyclone 10Mbps Combo",
- PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
-
- {"3c900 Cyclone 10Mbps TPC", /* AKPM: from Don's 0.99M */
- PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
- {"3c900B-FL Cyclone 10base-FL",
- PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
- {"3c905 Boomerang 100baseTx",
- PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_RESET, 64, },
- {"3c905 Boomerang 100baseT4",
- PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_RESET, 64, },
- {"3C905B-TX Fast Etherlink XL PCI",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
- {"3c905B Cyclone 100baseTx",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
-
- {"3c905B Cyclone 10/100/BNC",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
- {"3c905B-FX Cyclone 100baseFx",
- PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
- {"3c905C Tornado",
- PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
- {"3c920B-EMB-WNM (ATI Radeon 9100 IGP)",
- PCI_USES_MASTER, IS_TORNADO|HAS_MII|HAS_HWCKSM, 128, },
- {"3c980 Cyclone",
- PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
-
- {"3c980C Python-T",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
- {"3cSOHO100-TX Hurricane",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
- {"3c555 Laptop Hurricane",
- PCI_USES_MASTER, IS_CYCLONE|EEPROM_8BIT|HAS_HWCKSM, 128, },
- {"3c556 Laptop Tornado",
- PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_8BIT|HAS_CB_FNS|INVERT_MII_PWR|
- HAS_HWCKSM, 128, },
- {"3c556B Laptop Hurricane",
- PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_OFFSET|HAS_CB_FNS|INVERT_MII_PWR|
- WNO_XCVR_PWR|HAS_HWCKSM, 128, },
-
- {"3c575 [Megahertz] 10/100 LAN CardBus",
- PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
- {"3c575 Boomerang CardBus",
- PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
- {"3CCFE575BT Cyclone CardBus",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|
- INVERT_LED_PWR|HAS_HWCKSM, 128, },
- {"3CCFE575CT Tornado CardBus",
- PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
- MAX_COLLISION_RESET|HAS_HWCKSM, 128, },
- {"3CCFE656 Cyclone CardBus",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
- INVERT_LED_PWR|HAS_HWCKSM, 128, },
-
- {"3CCFEM656B Cyclone+Winmodem CardBus",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
- INVERT_LED_PWR|HAS_HWCKSM, 128, },
- {"3CXFEM656C Tornado+Winmodem CardBus", /* From pcmcia-cs-3.1.5 */
- PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
- MAX_COLLISION_RESET|HAS_HWCKSM, 128, },
- {"3c450 HomePNA Tornado", /* AKPM: from Don's 0.99Q */
- PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
- {"3c920 Tornado",
- PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
- {"3c982 Hydra Dual Port A",
- PCI_USES_MASTER, IS_TORNADO|HAS_HWCKSM|HAS_NWAY, 128, },
-
- {"3c982 Hydra Dual Port B",
- PCI_USES_MASTER, IS_TORNADO|HAS_HWCKSM|HAS_NWAY, 128, },
- {"3c905B-T4",
- PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
- {"3c920B-EMB-WNM Tornado",
- PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
-
- {NULL,}, /* NULL terminated list. */
-};
-
-
-static DEFINE_PCI_DEVICE_TABLE(vortex_pci_tbl) = {
- { 0x10B7, 0x5900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C590 },
- { 0x10B7, 0x5920, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C592 },
- { 0x10B7, 0x5970, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C597 },
- { 0x10B7, 0x5950, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_1 },
- { 0x10B7, 0x5951, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_2 },
-
- { 0x10B7, 0x5952, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_3 },
- { 0x10B7, 0x9000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_1 },
- { 0x10B7, 0x9001, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_2 },
- { 0x10B7, 0x9004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_3 },
- { 0x10B7, 0x9005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_4 },
-
- { 0x10B7, 0x9006, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_5 },
- { 0x10B7, 0x900A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900B_FL },
- { 0x10B7, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_1 },
- { 0x10B7, 0x9051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_2 },
- { 0x10B7, 0x9054, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_TX },
- { 0x10B7, 0x9055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_1 },
-
- { 0x10B7, 0x9058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_2 },
- { 0x10B7, 0x905A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_FX },
- { 0x10B7, 0x9200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905C },
- { 0x10B7, 0x9202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C9202 },
- { 0x10B7, 0x9800, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C980 },
- { 0x10B7, 0x9805, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C9805 },
-
- { 0x10B7, 0x7646, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CSOHO100_TX },
- { 0x10B7, 0x5055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C555 },
- { 0x10B7, 0x6055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556 },
- { 0x10B7, 0x6056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556B },
- { 0x10B7, 0x5b57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575 },
-
- { 0x10B7, 0x5057, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575_1 },
- { 0x10B7, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575 },
- { 0x10B7, 0x5257, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575CT },
- { 0x10B7, 0x6560, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE656 },
- { 0x10B7, 0x6562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656 },
-
- { 0x10B7, 0x6564, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656_1 },
- { 0x10B7, 0x4500, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C450 },
- { 0x10B7, 0x9201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C920 },
- { 0x10B7, 0x1201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C982A },
- { 0x10B7, 0x1202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C982B },
-
- { 0x10B7, 0x9056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_905BT4 },
- { 0x10B7, 0x9210, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_920B_EMB_WNM },
-
- {0,} /* 0 terminated list. */
-};
-MODULE_DEVICE_TABLE(pci, vortex_pci_tbl);
-
-
-/* Operational definitions.
- These are not used by other compilation units and thus are not
- exported in a ".h" file.
-
- First the windows. There are eight register windows, with the command
- and status registers available in each.
- */
-#define EL3_CMD 0x0e
-#define EL3_STATUS 0x0e
-
-/* The top five bits written to EL3_CMD are a command, the lower
- 11 bits are the parameter, if applicable.
-   Note that 11 parameter bits were fine for ethernet, but the new chip
-   can handle FDDI-length frames (~4500 octets), and parameters are now
-   counted in 32-bit 'Dwords' rather than octets. */
-
-enum vortex_cmd {
- TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
- RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11,
- UpStall = 6<<11, UpUnstall = (6<<11)+1,
- DownStall = (6<<11)+2, DownUnstall = (6<<11)+3,
- RxDiscard = 8<<11, TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
- FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
- SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
- SetTxThreshold = 18<<11, SetTxStart = 19<<11,
- StartDMAUp = 20<<11, StartDMADown = (20<<11)+1, StatsEnable = 21<<11,
- StatsDisable = 22<<11, StopCoax = 23<<11, SetFilterBit = 25<<11,};
-
-/* The SetRxFilter command accepts the following classes: */
-enum RxFilter {
- RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8 };
-
-/* Bits in the general status register. */
-enum vortex_status {
- IntLatch = 0x0001, HostError = 0x0002, TxComplete = 0x0004,
- TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
- IntReq = 0x0040, StatsFull = 0x0080,
- DMADone = 1<<8, DownComplete = 1<<9, UpComplete = 1<<10,
- DMAInProgress = 1<<11, /* DMA controller is still busy.*/
- CmdInProgress = 1<<12, /* EL3_CMD is still busy.*/
-};
-
-/* Register window 1 offsets, the window used in normal operation.
- On the Vortex this window is always mapped at offsets 0x10-0x1f. */
-enum Window1 {
- TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
- RxStatus = 0x18, Timer=0x1A, TxStatus = 0x1B,
- TxFree = 0x1C, /* Remaining free bytes in Tx buffer. */
-};
-enum Window0 {
- Wn0EepromCmd = 10, /* Window 0: EEPROM command register. */
- Wn0EepromData = 12, /* Window 0: EEPROM results register. */
- IntrStatus=0x0E, /* Valid in all windows. */
-};
-enum Win0_EEPROM_bits {
- EEPROM_Read = 0x80, EEPROM_WRITE = 0x40, EEPROM_ERASE = 0xC0,
- EEPROM_EWENB = 0x30, /* Enable erasing/writing for 10 msec. */
- EEPROM_EWDIS = 0x00, /* Disable EWENB before 10 msec timeout. */
-};
-/* EEPROM locations. */
-enum eeprom_offset {
- PhysAddr01=0, PhysAddr23=1, PhysAddr45=2, ModelID=3,
- EtherLink3ID=7, IFXcvrIO=8, IRQLine=9,
- NodeAddr01=10, NodeAddr23=11, NodeAddr45=12,
- DriverTune=13, Checksum=15};
-
-enum Window2 { /* Window 2. */
- Wn2_ResetOptions=12,
-};
-enum Window3 { /* Window 3: MAC/config bits. */
- Wn3_Config=0, Wn3_MaxPktSize=4, Wn3_MAC_Ctrl=6, Wn3_Options=8,
-};
-
-#define BFEXT(value, offset, bitcount) \
- ((((unsigned long)(value)) >> (offset)) & ((1 << (bitcount)) - 1))
-
-#define BFINS(lhs, rhs, offset, bitcount) \
- (((lhs) & ~((((1 << (bitcount)) - 1)) << (offset))) | \
- (((rhs) & ((1 << (bitcount)) - 1)) << (offset)))
-
-#define RAM_SIZE(v) BFEXT(v, 0, 3)
-#define RAM_WIDTH(v) BFEXT(v, 3, 1)
-#define RAM_SPEED(v) BFEXT(v, 4, 2)
-#define ROM_SIZE(v) BFEXT(v, 6, 2)
-#define RAM_SPLIT(v) BFEXT(v, 16, 2)
-#define XCVR(v) BFEXT(v, 20, 4)
-#define AUTOSELECT(v) BFEXT(v, 24, 1)
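The BFEXT()/BFINS() helpers above are easiest to read against a concrete value. A minimal sketch, assuming a hypothetical Wn3_Config contents of 0x01a00000 (XCVR_MII comes from the xcvr_types enum further down; none of this is driver code):

	u32 config = 0x01a00000;		/* hypothetical register value */
	int xcvr = XCVR(config);		/* transceiver type, bits 20..23 */
	int autoselect = AUTOSELECT(config);	/* media autoselect flag, bit 24 */

	/* Rewrite only the 4-bit transceiver field, keeping everything else. */
	config = BFINS(config, XCVR_MII, 20, 4);

vortex_up() further down performs exactly this BFINS() step when it forces the selected media type into the InternalConfig register.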
-
-enum Window4 { /* Window 4: Xcvr/media bits. */
- Wn4_FIFODiag = 4, Wn4_NetDiag = 6, Wn4_PhysicalMgmt=8, Wn4_Media = 10,
-};
-enum Win4_Media_bits {
- Media_SQE = 0x0008, /* Enable SQE error counting for AUI. */
- Media_10TP = 0x00C0, /* Enable link beat and jabber for 10baseT. */
- Media_Lnk = 0x0080, /* Enable just link beat for 100TX/100FX. */
- Media_LnkBeat = 0x0800,
-};
-enum Window7 { /* Window 7: Bus Master control. */
- Wn7_MasterAddr = 0, Wn7_VlanEtherType=4, Wn7_MasterLen = 6,
- Wn7_MasterStatus = 12,
-};
-/* Boomerang bus master control registers. */
-enum MasterCtrl {
- PktStatus = 0x20, DownListPtr = 0x24, FragAddr = 0x28, FragLen = 0x2c,
- TxFreeThreshold = 0x2f, UpPktStatus = 0x30, UpListPtr = 0x38,
-};
-
-/* The Rx and Tx descriptor lists.
- Caution Alpha hackers: these types are 32 bits! Note also the 8 byte
-   alignment constraint on tx_ring[] and rx_ring[]. */
-#define LAST_FRAG 0x80000000 /* Last Addr/Len pair in descriptor. */
-#define DN_COMPLETE 0x00010000 /* This packet has been downloaded */
-struct boom_rx_desc {
- __le32 next; /* Last entry points to 0. */
- __le32 status;
- __le32 addr; /* Up to 63 addr/len pairs possible. */
- __le32 length; /* Set LAST_FRAG to indicate last pair. */
-};
-/* Values for the Rx status entry. */
-enum rx_desc_status {
- RxDComplete=0x00008000, RxDError=0x4000,
- /* See boomerang_rx() for actual error bits */
- IPChksumErr=1<<25, TCPChksumErr=1<<26, UDPChksumErr=1<<27,
- IPChksumValid=1<<29, TCPChksumValid=1<<30, UDPChksumValid=1<<31,
-};
-
-#ifdef MAX_SKB_FRAGS
-#define DO_ZEROCOPY 1
-#else
-#define DO_ZEROCOPY 0
-#endif
-
-struct boom_tx_desc {
- __le32 next; /* Last entry points to 0. */
- __le32 status; /* bits 0:12 length, others see below. */
-#if DO_ZEROCOPY
- struct {
- __le32 addr;
- __le32 length;
- } frag[1+MAX_SKB_FRAGS];
-#else
- __le32 addr;
- __le32 length;
-#endif
-};
-
-/* Values for the Tx status entry. */
-enum tx_desc_status {
- CRCDisable=0x2000, TxDComplete=0x8000,
- AddIPChksum=0x02000000, AddTCPChksum=0x04000000, AddUDPChksum=0x08000000,
- TxIntrUploaded=0x80000000, /* IRQ when in FIFO, but maybe not sent. */
-};
-
-/* Chip features we care about in vp->capabilities, read from the EEPROM. */
-enum ChipCaps { CapBusMaster=0x20, CapPwrMgmt=0x2000 };
-
-struct vortex_extra_stats {
- unsigned long tx_deferred;
- unsigned long tx_max_collisions;
- unsigned long tx_multiple_collisions;
- unsigned long tx_single_collisions;
- unsigned long rx_bad_ssd;
-};
-
-struct vortex_private {
- /* The Rx and Tx rings should be quad-word-aligned. */
- struct boom_rx_desc* rx_ring;
- struct boom_tx_desc* tx_ring;
- dma_addr_t rx_ring_dma;
- dma_addr_t tx_ring_dma;
- /* The addresses of transmit- and receive-in-place skbuffs. */
- struct sk_buff* rx_skbuff[RX_RING_SIZE];
- struct sk_buff* tx_skbuff[TX_RING_SIZE];
- unsigned int cur_rx, cur_tx; /* The next free ring entry */
- unsigned int dirty_rx, dirty_tx; /* The ring entries to be free()ed. */
- struct vortex_extra_stats xstats; /* NIC-specific extra stats */
- struct sk_buff *tx_skb; /* Packet being eaten by bus master ctrl. */
- dma_addr_t tx_skb_dma; /* Allocated DMA address for bus master ctrl DMA. */
-
- /* PCI configuration space information. */
- struct device *gendev;
- void __iomem *ioaddr; /* IO address space */
- void __iomem *cb_fn_base; /* CardBus function status addr space. */
-
- /* Some values here only for performance evaluation and path-coverage */
- int rx_nocopy, rx_copy, queued_packet, rx_csumhits;
- int card_idx;
-
- /* The remainder are related to chip state, mostly media selection. */
- struct timer_list timer; /* Media selection timer. */
- struct timer_list rx_oom_timer; /* Rx skb allocation retry timer */
- int options; /* User-settable misc. driver options. */
- unsigned int media_override:4, /* Passed-in media type. */
- default_media:4, /* Read from the EEPROM/Wn3_Config. */
- full_duplex:1, autoselect:1,
- bus_master:1, /* Vortex can only do a fragment bus-m. */
- full_bus_master_tx:1, full_bus_master_rx:2, /* Boomerang */
- flow_ctrl:1, /* Use 802.3x flow control (PAUSE only) */
- partner_flow_ctrl:1, /* Partner supports flow control */
- has_nway:1,
- enable_wol:1, /* Wake-on-LAN is enabled */
- pm_state_valid:1, /* pci_dev->saved_config_space has sane contents */
- open:1,
- medialock:1,
- must_free_region:1, /* Flag: if zero, Cardbus owns the I/O region */
- large_frames:1, /* accept large frames */
- handling_irq:1; /* private in_irq indicator */
- /* {get|set}_wol operations are already serialized by rtnl.
- * no additional locking is required for the enable_wol and acpi_set_WOL()
- */
- int drv_flags;
- u16 status_enable;
- u16 intr_enable;
- u16 available_media; /* From Wn3_Options. */
- u16 capabilities, info1, info2; /* Various, from EEPROM. */
- u16 advertising; /* NWay media advertisement */
- unsigned char phys[2]; /* MII device addresses. */
- u16 deferred; /* Resend these interrupts when we
- * bale from the ISR */
- u16 io_size; /* Size of PCI region (for release_region) */
-
- /* Serialises access to hardware other than MII and variables below.
- * The lock hierarchy is rtnl_lock > {lock, mii_lock} > window_lock. */
- spinlock_t lock;
-
- spinlock_t mii_lock; /* Serialises access to MII */
- struct mii_if_info mii; /* MII lib hooks/info */
- spinlock_t window_lock; /* Serialises access to windowed regs */
- int window; /* Register window */
-};
-
-static void window_set(struct vortex_private *vp, int window)
-{
- if (window != vp->window) {
- iowrite16(SelectWindow + window, vp->ioaddr + EL3_CMD);
- vp->window = window;
- }
-}
-
-#define DEFINE_WINDOW_IO(size) \
-static u ## size \
-window_read ## size(struct vortex_private *vp, int window, int addr) \
-{ \
- unsigned long flags; \
- u ## size ret; \
- spin_lock_irqsave(&vp->window_lock, flags); \
- window_set(vp, window); \
- ret = ioread ## size(vp->ioaddr + addr); \
- spin_unlock_irqrestore(&vp->window_lock, flags); \
- return ret; \
-} \
-static void \
-window_write ## size(struct vortex_private *vp, u ## size value, \
- int window, int addr) \
-{ \
- unsigned long flags; \
- spin_lock_irqsave(&vp->window_lock, flags); \
- window_set(vp, window); \
- iowrite ## size(value, vp->ioaddr + addr); \
- spin_unlock_irqrestore(&vp->window_lock, flags); \
-}
-DEFINE_WINDOW_IO(8)
-DEFINE_WINDOW_IO(16)
-DEFINE_WINDOW_IO(32)
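A short usage sketch for the generated window helpers (illustrative only; vp is assumed to point at a live struct vortex_private): read the window 3 MAC control word and set the full-duplex bit that vortex_set_duplex() writes later in the file (that function composes the whole value rather than read-modify-writing it).

	u16 mac_ctrl = window_read16(vp, 3, Wn3_MAC_Ctrl);
	window_write16(vp, mac_ctrl | 0x20, 3, Wn3_MAC_Ctrl);	/* 0x20 = full duplex */

Each helper takes window_lock and selects the window itself, so callers need none of the EL3WINDOW()-style bookkeeping used in the older drivers above.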
-
-#ifdef CONFIG_PCI
-#define DEVICE_PCI(dev) (((dev)->bus == &pci_bus_type) ? to_pci_dev((dev)) : NULL)
-#else
-#define DEVICE_PCI(dev) NULL
-#endif
-
-#define VORTEX_PCI(vp) \
- ((struct pci_dev *) (((vp)->gendev) ? DEVICE_PCI((vp)->gendev) : NULL))
-
-#ifdef CONFIG_EISA
-#define DEVICE_EISA(dev) (((dev)->bus == &eisa_bus_type) ? to_eisa_device((dev)) : NULL)
-#else
-#define DEVICE_EISA(dev) NULL
-#endif
-
-#define VORTEX_EISA(vp) \
- ((struct eisa_device *) (((vp)->gendev) ? DEVICE_EISA((vp)->gendev) : NULL))
-
-/* The action to take with a media selection timer tick.
- Note that we deviate from the 3Com order by checking 10base2 before AUI.
- */
-enum xcvr_types {
- XCVR_10baseT=0, XCVR_AUI, XCVR_10baseTOnly, XCVR_10base2, XCVR_100baseTx,
- XCVR_100baseFx, XCVR_MII=6, XCVR_NWAY=8, XCVR_ExtMII=9, XCVR_Default=10,
-};
-
-static const struct media_table {
- char *name;
- unsigned int media_bits:16, /* Bits to set in Wn4_Media register. */
- mask:8, /* The transceiver-present bit in Wn3_Config.*/
- next:8; /* The media type to try next. */
- int wait; /* Time before we check media status. */
-} media_tbl[] = {
- { "10baseT", Media_10TP,0x08, XCVR_10base2, (14*HZ)/10},
- { "10Mbs AUI", Media_SQE, 0x20, XCVR_Default, (1*HZ)/10},
- { "undefined", 0, 0x80, XCVR_10baseT, 10000},
- { "10base2", 0, 0x10, XCVR_AUI, (1*HZ)/10},
- { "100baseTX", Media_Lnk, 0x02, XCVR_100baseFx, (14*HZ)/10},
- { "100baseFX", Media_Lnk, 0x04, XCVR_MII, (14*HZ)/10},
- { "MII", 0, 0x41, XCVR_10baseT, 3*HZ },
- { "undefined", 0, 0x01, XCVR_10baseT, 10000},
- { "Autonegotiate", 0, 0x41, XCVR_10baseT, 3*HZ},
- { "MII-External", 0, 0x41, XCVR_10baseT, 3*HZ },
- { "Default", 0, 0xFF, XCVR_10baseT, 10000},
-};
-
-static struct {
- const char str[ETH_GSTRING_LEN];
-} ethtool_stats_keys[] = {
- { "tx_deferred" },
- { "tx_max_collisions" },
- { "tx_multiple_collisions" },
- { "tx_single_collisions" },
- { "rx_bad_ssd" },
-};
-
-/* number of ETHTOOL_GSTATS u64's */
-#define VORTEX_NUM_STATS 5
-
-static int vortex_probe1(struct device *gendev, void __iomem *ioaddr, int irq,
- int chip_idx, int card_idx);
-static int vortex_up(struct net_device *dev);
-static void vortex_down(struct net_device *dev, int final);
-static int vortex_open(struct net_device *dev);
-static void mdio_sync(struct vortex_private *vp, int bits);
-static int mdio_read(struct net_device *dev, int phy_id, int location);
-static void mdio_write(struct net_device *vp, int phy_id, int location, int value);
-static void vortex_timer(unsigned long arg);
-static void rx_oom_timer(unsigned long arg);
-static netdev_tx_t vortex_start_xmit(struct sk_buff *skb,
- struct net_device *dev);
-static netdev_tx_t boomerang_start_xmit(struct sk_buff *skb,
- struct net_device *dev);
-static int vortex_rx(struct net_device *dev);
-static int boomerang_rx(struct net_device *dev);
-static irqreturn_t vortex_interrupt(int irq, void *dev_id);
-static irqreturn_t boomerang_interrupt(int irq, void *dev_id);
-static int vortex_close(struct net_device *dev);
-static void dump_tx_ring(struct net_device *dev);
-static void update_stats(void __iomem *ioaddr, struct net_device *dev);
-static struct net_device_stats *vortex_get_stats(struct net_device *dev);
-static void set_rx_mode(struct net_device *dev);
-#ifdef CONFIG_PCI
-static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
-#endif
-static void vortex_tx_timeout(struct net_device *dev);
-static void acpi_set_WOL(struct net_device *dev);
-static const struct ethtool_ops vortex_ethtool_ops;
-static void set_8021q_mode(struct net_device *dev, int enable);
-
-/* This driver uses 'options' to pass the media type, full-duplex flag, etc. */
-/* Option count limit only -- unlimited interfaces are supported. */
-#define MAX_UNITS 8
-static int options[MAX_UNITS] = { [0 ... MAX_UNITS-1] = -1 };
-static int full_duplex[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
-static int hw_checksums[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
-static int flow_ctrl[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
-static int enable_wol[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
-static int use_mmio[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
-static int global_options = -1;
-static int global_full_duplex = -1;
-static int global_enable_wol = -1;
-static int global_use_mmio = -1;
-
-/* Variables to work-around the Compaq PCI BIOS32 problem. */
-static int compaq_ioaddr, compaq_irq, compaq_device_id = 0x5900;
-static struct net_device *compaq_net_device;
-
-static int vortex_cards_found;
-
-module_param(debug, int, 0);
-module_param(global_options, int, 0);
-module_param_array(options, int, NULL, 0);
-module_param(global_full_duplex, int, 0);
-module_param_array(full_duplex, int, NULL, 0);
-module_param_array(hw_checksums, int, NULL, 0);
-module_param_array(flow_ctrl, int, NULL, 0);
-module_param(global_enable_wol, int, 0);
-module_param_array(enable_wol, int, NULL, 0);
-module_param(rx_copybreak, int, 0);
-module_param(max_interrupt_work, int, 0);
-module_param(compaq_ioaddr, int, 0);
-module_param(compaq_irq, int, 0);
-module_param(compaq_device_id, int, 0);
-module_param(watchdog, int, 0);
-module_param(global_use_mmio, int, 0);
-module_param_array(use_mmio, int, NULL, 0);
-MODULE_PARM_DESC(debug, "3c59x debug level (0-6)");
-MODULE_PARM_DESC(options, "3c59x: Bits 0-3: media type, bit 4: bus mastering, bit 9: full duplex");
-MODULE_PARM_DESC(global_options, "3c59x: same as options, but applies to all NICs if options is unset");
-MODULE_PARM_DESC(full_duplex, "3c59x full duplex setting(s) (1)");
-MODULE_PARM_DESC(global_full_duplex, "3c59x: same as full_duplex, but applies to all NICs if full_duplex is unset");
-MODULE_PARM_DESC(hw_checksums, "3c59x Hardware checksum checking by adapter(s) (0-1)");
-MODULE_PARM_DESC(flow_ctrl, "3c59x 802.3x flow control usage (PAUSE only) (0-1)");
-MODULE_PARM_DESC(enable_wol, "3c59x: Turn on Wake-on-LAN for adapter(s) (0-1)");
-MODULE_PARM_DESC(global_enable_wol, "3c59x: same as enable_wol, but applies to all NICs if enable_wol is unset");
-MODULE_PARM_DESC(rx_copybreak, "3c59x copy breakpoint for copy-only-tiny-frames");
-MODULE_PARM_DESC(max_interrupt_work, "3c59x maximum events handled per interrupt");
-MODULE_PARM_DESC(compaq_ioaddr, "3c59x PCI I/O base address (Compaq BIOS problem workaround)");
-MODULE_PARM_DESC(compaq_irq, "3c59x PCI IRQ number (Compaq BIOS problem workaround)");
-MODULE_PARM_DESC(compaq_device_id, "3c59x PCI device ID (Compaq BIOS problem workaround)");
-MODULE_PARM_DESC(watchdog, "3c59x transmit timeout in milliseconds");
-MODULE_PARM_DESC(global_use_mmio, "3c59x: same as use_mmio, but applies to all NICs if options is unset");
-MODULE_PARM_DESC(use_mmio, "3c59x: use memory-mapped PCI I/O resource (0-1)");
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void poll_vortex(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- unsigned long flags;
- local_irq_save(flags);
- (vp->full_bus_master_rx ? boomerang_interrupt:vortex_interrupt)(dev->irq,dev);
- local_irq_restore(flags);
-}
-#endif
-
-#ifdef CONFIG_PM
-
-static int vortex_suspend(struct device *dev)
-{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct net_device *ndev = pci_get_drvdata(pdev);
-
- if (!ndev || !netif_running(ndev))
- return 0;
-
- netif_device_detach(ndev);
- vortex_down(ndev, 1);
-
- return 0;
-}
-
-static int vortex_resume(struct device *dev)
-{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct net_device *ndev = pci_get_drvdata(pdev);
- int err;
-
- if (!ndev || !netif_running(ndev))
- return 0;
-
- err = vortex_up(ndev);
- if (err)
- return err;
-
- netif_device_attach(ndev);
-
- return 0;
-}
-
-static const struct dev_pm_ops vortex_pm_ops = {
- .suspend = vortex_suspend,
- .resume = vortex_resume,
- .freeze = vortex_suspend,
- .thaw = vortex_resume,
- .poweroff = vortex_suspend,
- .restore = vortex_resume,
-};
-
-#define VORTEX_PM_OPS (&vortex_pm_ops)
-
-#else /* !CONFIG_PM */
-
-#define VORTEX_PM_OPS NULL
-
-#endif /* !CONFIG_PM */
-
-#ifdef CONFIG_EISA
-static struct eisa_device_id vortex_eisa_ids[] = {
- { "TCM5920", CH_3C592 },
- { "TCM5970", CH_3C597 },
- { "" }
-};
-MODULE_DEVICE_TABLE(eisa, vortex_eisa_ids);
-
-static int __init vortex_eisa_probe(struct device *device)
-{
- void __iomem *ioaddr;
- struct eisa_device *edev;
-
- edev = to_eisa_device(device);
-
- if (!request_region(edev->base_addr, VORTEX_TOTAL_SIZE, DRV_NAME))
- return -EBUSY;
-
- ioaddr = ioport_map(edev->base_addr, VORTEX_TOTAL_SIZE);
-
- if (vortex_probe1(device, ioaddr, ioread16(ioaddr + 0xC88) >> 12,
- edev->id.driver_data, vortex_cards_found)) {
- release_region(edev->base_addr, VORTEX_TOTAL_SIZE);
- return -ENODEV;
- }
-
- vortex_cards_found++;
-
- return 0;
-}
-
-static int __devexit vortex_eisa_remove(struct device *device)
-{
- struct eisa_device *edev;
- struct net_device *dev;
- struct vortex_private *vp;
- void __iomem *ioaddr;
-
- edev = to_eisa_device(device);
- dev = eisa_get_drvdata(edev);
-
- if (!dev) {
- pr_err("vortex_eisa_remove called for Compaq device!\n");
- BUG();
- }
-
- vp = netdev_priv(dev);
- ioaddr = vp->ioaddr;
-
- unregister_netdev(dev);
- iowrite16(TotalReset|0x14, ioaddr + EL3_CMD);
- release_region(dev->base_addr, VORTEX_TOTAL_SIZE);
-
- free_netdev(dev);
- return 0;
-}
-
-static struct eisa_driver vortex_eisa_driver = {
- .id_table = vortex_eisa_ids,
- .driver = {
- .name = "3c59x",
- .probe = vortex_eisa_probe,
- .remove = __devexit_p(vortex_eisa_remove)
- }
-};
-
-#endif /* CONFIG_EISA */
-
-/* returns count found (>= 0), or negative on error */
-static int __init vortex_eisa_init(void)
-{
- int eisa_found = 0;
- int orig_cards_found = vortex_cards_found;
-
-#ifdef CONFIG_EISA
- int err;
-
- err = eisa_driver_register (&vortex_eisa_driver);
- if (!err) {
- /*
-		 * Because of the way the EISA bus is probed, we cannot assume
-		 * any devices have been found when we exit from
-		 * eisa_driver_register (the bus root driver may not be
-		 * initialized yet). So we blindly assume something was
-		 * found, and let the sysfs magic happen...
- */
- eisa_found = 1;
- }
-#endif
-
- /* Special code to work-around the Compaq PCI BIOS32 problem. */
- if (compaq_ioaddr) {
- vortex_probe1(NULL, ioport_map(compaq_ioaddr, VORTEX_TOTAL_SIZE),
- compaq_irq, compaq_device_id, vortex_cards_found++);
- }
-
- return vortex_cards_found - orig_cards_found + eisa_found;
-}
-
-/* returns count (>= 0), or negative on error */
-static int __devinit vortex_init_one(struct pci_dev *pdev,
- const struct pci_device_id *ent)
-{
- int rc, unit, pci_bar;
- struct vortex_chip_info *vci;
- void __iomem *ioaddr;
-
- /* wake up and enable device */
- rc = pci_enable_device(pdev);
- if (rc < 0)
- goto out;
-
- unit = vortex_cards_found;
-
- if (global_use_mmio < 0 && (unit >= MAX_UNITS || use_mmio[unit] < 0)) {
- /* Determine the default if the user didn't override us */
- vci = &vortex_info_tbl[ent->driver_data];
- pci_bar = vci->drv_flags & (IS_CYCLONE | IS_TORNADO) ? 1 : 0;
- } else if (unit < MAX_UNITS && use_mmio[unit] >= 0)
- pci_bar = use_mmio[unit] ? 1 : 0;
- else
- pci_bar = global_use_mmio ? 1 : 0;
-
- ioaddr = pci_iomap(pdev, pci_bar, 0);
- if (!ioaddr) /* If mapping fails, fall-back to BAR 0... */
- ioaddr = pci_iomap(pdev, 0, 0);
- if (!ioaddr) {
- pci_disable_device(pdev);
- rc = -ENOMEM;
- goto out;
- }
-
- rc = vortex_probe1(&pdev->dev, ioaddr, pdev->irq,
- ent->driver_data, unit);
- if (rc < 0) {
- pci_iounmap(pdev, ioaddr);
- pci_disable_device(pdev);
- goto out;
- }
-
- vortex_cards_found++;
-
-out:
- return rc;
-}
-
-static const struct net_device_ops boomrang_netdev_ops = {
- .ndo_open = vortex_open,
- .ndo_stop = vortex_close,
- .ndo_start_xmit = boomerang_start_xmit,
- .ndo_tx_timeout = vortex_tx_timeout,
- .ndo_get_stats = vortex_get_stats,
-#ifdef CONFIG_PCI
- .ndo_do_ioctl = vortex_ioctl,
-#endif
- .ndo_set_multicast_list = set_rx_mode,
- .ndo_change_mtu = eth_change_mtu,
- .ndo_set_mac_address = eth_mac_addr,
- .ndo_validate_addr = eth_validate_addr,
-#ifdef CONFIG_NET_POLL_CONTROLLER
- .ndo_poll_controller = poll_vortex,
-#endif
-};
-
-static const struct net_device_ops vortex_netdev_ops = {
- .ndo_open = vortex_open,
- .ndo_stop = vortex_close,
- .ndo_start_xmit = vortex_start_xmit,
- .ndo_tx_timeout = vortex_tx_timeout,
- .ndo_get_stats = vortex_get_stats,
-#ifdef CONFIG_PCI
- .ndo_do_ioctl = vortex_ioctl,
-#endif
- .ndo_set_multicast_list = set_rx_mode,
- .ndo_change_mtu = eth_change_mtu,
- .ndo_set_mac_address = eth_mac_addr,
- .ndo_validate_addr = eth_validate_addr,
-#ifdef CONFIG_NET_POLL_CONTROLLER
- .ndo_poll_controller = poll_vortex,
-#endif
-};
-
-/*
- * Start up the PCI/EISA device which is described by *gendev.
- * Return 0 on success.
- *
- * NOTE: pdev can be NULL, for the case of a Compaq device
- */
-static int __devinit vortex_probe1(struct device *gendev,
- void __iomem *ioaddr, int irq,
- int chip_idx, int card_idx)
-{
- struct vortex_private *vp;
- int option;
- unsigned int eeprom[0x40], checksum = 0; /* EEPROM contents */
- int i, step;
- struct net_device *dev;
- static int printed_version;
- int retval, print_info;
- struct vortex_chip_info * const vci = &vortex_info_tbl[chip_idx];
- const char *print_name = "3c59x";
- struct pci_dev *pdev = NULL;
- struct eisa_device *edev = NULL;
-
- if (!printed_version) {
- pr_info("%s", version);
- printed_version = 1;
- }
-
- if (gendev) {
- if ((pdev = DEVICE_PCI(gendev))) {
- print_name = pci_name(pdev);
- }
-
- if ((edev = DEVICE_EISA(gendev))) {
- print_name = dev_name(&edev->dev);
- }
- }
-
- dev = alloc_etherdev(sizeof(*vp));
- retval = -ENOMEM;
- if (!dev) {
- pr_err(PFX "unable to allocate etherdev, aborting\n");
- goto out;
- }
- SET_NETDEV_DEV(dev, gendev);
- vp = netdev_priv(dev);
-
- option = global_options;
-
- /* The lower four bits are the media type. */
- if (dev->mem_start) {
- /*
- * The 'options' param is passed in as the third arg to the
- * LILO 'ether=' argument for non-modular use
- */
- option = dev->mem_start;
- }
- else if (card_idx < MAX_UNITS) {
- if (options[card_idx] >= 0)
- option = options[card_idx];
- }
-
- if (option > 0) {
- if (option & 0x8000)
- vortex_debug = 7;
- if (option & 0x4000)
- vortex_debug = 2;
- if (option & 0x0400)
- vp->enable_wol = 1;
- }
-
- print_info = (vortex_debug > 1);
- if (print_info)
- pr_info("See Documentation/networking/vortex.txt\n");
-
- pr_info("%s: 3Com %s %s at %p.\n",
- print_name,
- pdev ? "PCI" : "EISA",
- vci->name,
- ioaddr);
-
- dev->base_addr = (unsigned long)ioaddr;
- dev->irq = irq;
- dev->mtu = mtu;
- vp->ioaddr = ioaddr;
- vp->large_frames = mtu > 1500;
- vp->drv_flags = vci->drv_flags;
- vp->has_nway = (vci->drv_flags & HAS_NWAY) ? 1 : 0;
- vp->io_size = vci->io_size;
- vp->card_idx = card_idx;
- vp->window = -1;
-
- /* module list only for Compaq device */
- if (gendev == NULL) {
- compaq_net_device = dev;
- }
-
- /* PCI-only startup logic */
- if (pdev) {
- /* EISA resources already marked, so only PCI needs to do this here */
- /* Ignore return value, because Cardbus drivers already allocate for us */
- if (request_region(dev->base_addr, vci->io_size, print_name) != NULL)
- vp->must_free_region = 1;
-
- /* enable bus-mastering if necessary */
- if (vci->flags & PCI_USES_MASTER)
- pci_set_master(pdev);
-
- if (vci->drv_flags & IS_VORTEX) {
- u8 pci_latency;
- u8 new_latency = 248;
-
- /* Check the PCI latency value. On the 3c590 series the latency timer
- must be set to the maximum value to avoid data corruption that occurs
-			   when the timer expires during a transfer. This bug exists in the
-			   Vortex chip only. */
- pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &pci_latency);
- if (pci_latency < new_latency) {
- pr_info("%s: Overriding PCI latency timer (CFLT) setting of %d, new value is %d.\n",
- print_name, pci_latency, new_latency);
- pci_write_config_byte(pdev, PCI_LATENCY_TIMER, new_latency);
- }
- }
- }
-
- spin_lock_init(&vp->lock);
- spin_lock_init(&vp->mii_lock);
- spin_lock_init(&vp->window_lock);
- vp->gendev = gendev;
- vp->mii.dev = dev;
- vp->mii.mdio_read = mdio_read;
- vp->mii.mdio_write = mdio_write;
- vp->mii.phy_id_mask = 0x1f;
- vp->mii.reg_num_mask = 0x1f;
-
- /* Makes sure rings are at least 16 byte aligned. */
- vp->rx_ring = pci_alloc_consistent(pdev, sizeof(struct boom_rx_desc) * RX_RING_SIZE
- + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
- &vp->rx_ring_dma);
- retval = -ENOMEM;
- if (!vp->rx_ring)
- goto free_region;
-
- vp->tx_ring = (struct boom_tx_desc *)(vp->rx_ring + RX_RING_SIZE);
- vp->tx_ring_dma = vp->rx_ring_dma + sizeof(struct boom_rx_desc) * RX_RING_SIZE;
-
- /* if we are a PCI driver, we store info in pdev->driver_data
- * instead of a module list */
- if (pdev)
- pci_set_drvdata(pdev, dev);
- if (edev)
- eisa_set_drvdata(edev, dev);
-
- vp->media_override = 7;
- if (option >= 0) {
- vp->media_override = ((option & 7) == 2) ? 0 : option & 15;
- if (vp->media_override != 7)
- vp->medialock = 1;
- vp->full_duplex = (option & 0x200) ? 1 : 0;
- vp->bus_master = (option & 16) ? 1 : 0;
- }
-
- if (global_full_duplex > 0)
- vp->full_duplex = 1;
- if (global_enable_wol > 0)
- vp->enable_wol = 1;
-
- if (card_idx < MAX_UNITS) {
- if (full_duplex[card_idx] > 0)
- vp->full_duplex = 1;
- if (flow_ctrl[card_idx] > 0)
- vp->flow_ctrl = 1;
- if (enable_wol[card_idx] > 0)
- vp->enable_wol = 1;
- }
-
- vp->mii.force_media = vp->full_duplex;
- vp->options = option;
- /* Read the station address from the EEPROM. */
- {
- int base;
-
- if (vci->drv_flags & EEPROM_8BIT)
- base = 0x230;
- else if (vci->drv_flags & EEPROM_OFFSET)
- base = EEPROM_Read + 0x30;
- else
- base = EEPROM_Read;
-
- for (i = 0; i < 0x40; i++) {
- int timer;
- window_write16(vp, base + i, 0, Wn0EepromCmd);
-			/* Pause for at least 162 us for the read to take place. */
- for (timer = 10; timer >= 0; timer--) {
- udelay(162);
- if ((window_read16(vp, 0, Wn0EepromCmd) &
- 0x8000) == 0)
- break;
- }
- eeprom[i] = window_read16(vp, 0, Wn0EepromData);
- }
- }
- for (i = 0; i < 0x18; i++)
- checksum ^= eeprom[i];
- checksum = (checksum ^ (checksum >> 8)) & 0xff;
- if (checksum != 0x00) { /* Grrr, needless incompatible change 3Com. */
- while (i < 0x21)
- checksum ^= eeprom[i++];
- checksum = (checksum ^ (checksum >> 8)) & 0xff;
- }
- if ((checksum != 0x00) && !(vci->drv_flags & IS_TORNADO))
- pr_cont(" ***INVALID CHECKSUM %4.4x*** ", checksum);
- for (i = 0; i < 3; i++)
- ((__be16 *)dev->dev_addr)[i] = htons(eeprom[i + 10]);
- memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
- if (print_info)
- pr_cont(" %pM", dev->dev_addr);
- /* Unfortunately an all zero eeprom passes the checksum and this
- gets found in the wild in failure cases. Crypto is hard 8) */
- if (!is_valid_ether_addr(dev->dev_addr)) {
- retval = -EINVAL;
- pr_err("*** EEPROM MAC address is invalid.\n");
- goto free_ring; /* With every pack */
- }
- for (i = 0; i < 6; i++)
- window_write8(vp, dev->dev_addr[i], 2, i);
-
- if (print_info)
- pr_cont(", IRQ %d\n", dev->irq);
- /* Tell them about an invalid IRQ. */
- if (dev->irq <= 0 || dev->irq >= nr_irqs)
- pr_warning(" *** Warning: IRQ %d is unlikely to work! ***\n",
- dev->irq);
-
- step = (window_read8(vp, 4, Wn4_NetDiag) & 0x1e) >> 1;
- if (print_info) {
- pr_info(" product code %02x%02x rev %02x.%d date %02d-%02d-%02d\n",
- eeprom[6]&0xff, eeprom[6]>>8, eeprom[0x14],
- step, (eeprom[4]>>5) & 15, eeprom[4] & 31, eeprom[4]>>9);
- }
-
-
- if (pdev && vci->drv_flags & HAS_CB_FNS) {
- unsigned short n;
-
- vp->cb_fn_base = pci_iomap(pdev, 2, 0);
- if (!vp->cb_fn_base) {
- retval = -ENOMEM;
- goto free_ring;
- }
-
- if (print_info) {
- pr_info("%s: CardBus functions mapped %16.16llx->%p\n",
- print_name,
- (unsigned long long)pci_resource_start(pdev, 2),
- vp->cb_fn_base);
- }
-
- n = window_read16(vp, 2, Wn2_ResetOptions) & ~0x4010;
- if (vp->drv_flags & INVERT_LED_PWR)
- n |= 0x10;
- if (vp->drv_flags & INVERT_MII_PWR)
- n |= 0x4000;
- window_write16(vp, n, 2, Wn2_ResetOptions);
- if (vp->drv_flags & WNO_XCVR_PWR) {
- window_write16(vp, 0x0800, 0, 0);
- }
- }
-
- /* Extract our information from the EEPROM data. */
- vp->info1 = eeprom[13];
- vp->info2 = eeprom[15];
- vp->capabilities = eeprom[16];
-
- if (vp->info1 & 0x8000) {
- vp->full_duplex = 1;
- if (print_info)
- pr_info("Full duplex capable\n");
- }
-
- {
- static const char * const ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
- unsigned int config;
- vp->available_media = window_read16(vp, 3, Wn3_Options);
- if ((vp->available_media & 0xff) == 0) /* Broken 3c916 */
- vp->available_media = 0x40;
- config = window_read32(vp, 3, Wn3_Config);
- if (print_info) {
- pr_debug(" Internal config register is %4.4x, transceivers %#x.\n",
- config, window_read16(vp, 3, Wn3_Options));
- pr_info(" %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
- 8 << RAM_SIZE(config),
- RAM_WIDTH(config) ? "word" : "byte",
- ram_split[RAM_SPLIT(config)],
- AUTOSELECT(config) ? "autoselect/" : "",
- XCVR(config) > XCVR_ExtMII ? "<invalid transceiver>" :
- media_tbl[XCVR(config)].name);
- }
- vp->default_media = XCVR(config);
- if (vp->default_media == XCVR_NWAY)
- vp->has_nway = 1;
- vp->autoselect = AUTOSELECT(config);
- }
-
- if (vp->media_override != 7) {
- pr_info("%s: Media override to transceiver type %d (%s).\n",
- print_name, vp->media_override,
- media_tbl[vp->media_override].name);
- dev->if_port = vp->media_override;
- } else
- dev->if_port = vp->default_media;
-
- if ((vp->available_media & 0x40) || (vci->drv_flags & HAS_NWAY) ||
- dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
- int phy, phy_idx = 0;
- mii_preamble_required++;
- if (vp->drv_flags & EXTRA_PREAMBLE)
- mii_preamble_required++;
- mdio_sync(vp, 32);
- mdio_read(dev, 24, MII_BMSR);
- for (phy = 0; phy < 32 && phy_idx < 1; phy++) {
- int mii_status, phyx;
-
- /*
- * For the 3c905CX we look at index 24 first, because it bogusly
- * reports an external PHY at all indices
- */
- if (phy == 0)
- phyx = 24;
- else if (phy <= 24)
- phyx = phy - 1;
- else
- phyx = phy;
- mii_status = mdio_read(dev, phyx, MII_BMSR);
- if (mii_status && mii_status != 0xffff) {
- vp->phys[phy_idx++] = phyx;
- if (print_info) {
- pr_info(" MII transceiver found at address %d, status %4x.\n",
- phyx, mii_status);
- }
- if ((mii_status & 0x0040) == 0)
- mii_preamble_required++;
- }
- }
- mii_preamble_required--;
- if (phy_idx == 0) {
- pr_warning(" ***WARNING*** No MII transceivers found!\n");
- vp->phys[0] = 24;
- } else {
- vp->advertising = mdio_read(dev, vp->phys[0], MII_ADVERTISE);
- if (vp->full_duplex) {
- /* Only advertise the FD media types. */
- vp->advertising &= ~0x02A0;
- mdio_write(dev, vp->phys[0], 4, vp->advertising);
- }
- }
- vp->mii.phy_id = vp->phys[0];
- }
-
- if (vp->capabilities & CapBusMaster) {
- vp->full_bus_master_tx = 1;
- if (print_info) {
- pr_info(" Enabling bus-master transmits and %s receives.\n",
- (vp->info2 & 1) ? "early" : "whole-frame" );
- }
- vp->full_bus_master_rx = (vp->info2 & 1) ? 1 : 2;
- vp->bus_master = 0; /* AKPM: vortex only */
- }
-
- /* The 3c59x-specific entries in the device structure. */
- if (vp->full_bus_master_tx) {
- dev->netdev_ops = &boomrang_netdev_ops;
- /* Actually, it still should work with iommu. */
- if (card_idx < MAX_UNITS &&
- ((hw_checksums[card_idx] == -1 && (vp->drv_flags & HAS_HWCKSM)) ||
- hw_checksums[card_idx] == 1)) {
- dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
- }
- } else
- dev->netdev_ops = &vortex_netdev_ops;
-
- if (print_info) {
- pr_info("%s: scatter/gather %sabled. h/w checksums %sabled\n",
- print_name,
- (dev->features & NETIF_F_SG) ? "en":"dis",
- (dev->features & NETIF_F_IP_CSUM) ? "en":"dis");
- }
-
- dev->ethtool_ops = &vortex_ethtool_ops;
- dev->watchdog_timeo = (watchdog * HZ) / 1000;
-
- if (pdev) {
- vp->pm_state_valid = 1;
- pci_save_state(VORTEX_PCI(vp));
- acpi_set_WOL(dev);
- }
- retval = register_netdev(dev);
- if (retval == 0)
- return 0;
-
-free_ring:
- pci_free_consistent(pdev,
- sizeof(struct boom_rx_desc) * RX_RING_SIZE
- + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
- vp->rx_ring,
- vp->rx_ring_dma);
-free_region:
- if (vp->must_free_region)
- release_region(dev->base_addr, vci->io_size);
- free_netdev(dev);
- pr_err(PFX "vortex_probe1 fails. Returns %d\n", retval);
-out:
- return retval;
-}
-
-static void
-issue_and_wait(struct net_device *dev, int cmd)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- int i;
-
- iowrite16(cmd, ioaddr + EL3_CMD);
- for (i = 0; i < 2000; i++) {
- if (!(ioread16(ioaddr + EL3_STATUS) & CmdInProgress))
- return;
- }
-
- /* OK, that didn't work. Do it the slow way. One second */
- for (i = 0; i < 100000; i++) {
- if (!(ioread16(ioaddr + EL3_STATUS) & CmdInProgress)) {
- if (vortex_debug > 1)
- pr_info("%s: command 0x%04x took %d usecs\n",
- dev->name, cmd, i * 10);
- return;
- }
- udelay(10);
- }
- pr_err("%s: command 0x%04x did not complete! Status=0x%x\n",
- dev->name, cmd, ioread16(ioaddr + EL3_STATUS));
-}
-
-static void
-vortex_set_duplex(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
-
- pr_info("%s: setting %s-duplex.\n",
- dev->name, (vp->full_duplex) ? "full" : "half");
-
- /* Set the full-duplex bit. */
- window_write16(vp,
- ((vp->info1 & 0x8000) || vp->full_duplex ? 0x20 : 0) |
- (vp->large_frames ? 0x40 : 0) |
- ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ?
- 0x100 : 0),
- 3, Wn3_MAC_Ctrl);
-}
-
-static void vortex_check_media(struct net_device *dev, unsigned int init)
-{
- struct vortex_private *vp = netdev_priv(dev);
- unsigned int ok_to_print = 0;
-
- if (vortex_debug > 3)
- ok_to_print = 1;
-
- if (mii_check_media(&vp->mii, ok_to_print, init)) {
- vp->full_duplex = vp->mii.full_duplex;
- vortex_set_duplex(dev);
- } else if (init) {
- vortex_set_duplex(dev);
- }
-}
-
-static int
-vortex_up(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- unsigned int config;
- int i, mii_reg1, mii_reg5, err = 0;
-
- if (VORTEX_PCI(vp)) {
- pci_set_power_state(VORTEX_PCI(vp), PCI_D0); /* Go active */
- if (vp->pm_state_valid)
- pci_restore_state(VORTEX_PCI(vp));
- err = pci_enable_device(VORTEX_PCI(vp));
- if (err) {
- pr_warning("%s: Could not enable device\n",
- dev->name);
- goto err_out;
- }
- }
-
- /* Before initializing select the active media port. */
- config = window_read32(vp, 3, Wn3_Config);
-
- if (vp->media_override != 7) {
- pr_info("%s: Media override to transceiver %d (%s).\n",
- dev->name, vp->media_override,
- media_tbl[vp->media_override].name);
- dev->if_port = vp->media_override;
- } else if (vp->autoselect) {
- if (vp->has_nway) {
- if (vortex_debug > 1)
- pr_info("%s: using NWAY device table, not %d\n",
- dev->name, dev->if_port);
- dev->if_port = XCVR_NWAY;
- } else {
- /* Find first available media type, starting with 100baseTx. */
- dev->if_port = XCVR_100baseTx;
- while (! (vp->available_media & media_tbl[dev->if_port].mask))
- dev->if_port = media_tbl[dev->if_port].next;
- if (vortex_debug > 1)
- pr_info("%s: first available media type: %s\n",
- dev->name, media_tbl[dev->if_port].name);
- }
- } else {
- dev->if_port = vp->default_media;
- if (vortex_debug > 1)
- pr_info("%s: using default media %s\n",
- dev->name, media_tbl[dev->if_port].name);
- }
-
- init_timer(&vp->timer);
- vp->timer.expires = RUN_AT(media_tbl[dev->if_port].wait);
- vp->timer.data = (unsigned long)dev;
- vp->timer.function = vortex_timer; /* timer handler */
- add_timer(&vp->timer);
-
- init_timer(&vp->rx_oom_timer);
- vp->rx_oom_timer.data = (unsigned long)dev;
- vp->rx_oom_timer.function = rx_oom_timer;
-
- if (vortex_debug > 1)
- pr_debug("%s: Initial media type %s.\n",
- dev->name, media_tbl[dev->if_port].name);
-
- vp->full_duplex = vp->mii.force_media;
- config = BFINS(config, dev->if_port, 20, 4);
- if (vortex_debug > 6)
- pr_debug("vortex_up(): writing 0x%x to InternalConfig\n", config);
- window_write32(vp, config, 3, Wn3_Config);
-
- if (dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
- mii_reg1 = mdio_read(dev, vp->phys[0], MII_BMSR);
- mii_reg5 = mdio_read(dev, vp->phys[0], MII_LPA);
- vp->partner_flow_ctrl = ((mii_reg5 & 0x0400) != 0);
- vp->mii.full_duplex = vp->full_duplex;
-
- vortex_check_media(dev, 1);
- }
- else
- vortex_set_duplex(dev);
-
- issue_and_wait(dev, TxReset);
- /*
- * Don't reset the PHY - that upsets autonegotiation during DHCP operations.
- */
- issue_and_wait(dev, RxReset|0x04);
-
-
- iowrite16(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
-
- if (vortex_debug > 1) {
- pr_debug("%s: vortex_up() irq %d media status %4.4x.\n",
- dev->name, dev->irq, window_read16(vp, 4, Wn4_Media));
- }
-
- /* Set the station address and mask in window 2 each time opened. */
- for (i = 0; i < 6; i++)
- window_write8(vp, dev->dev_addr[i], 2, i);
- for (; i < 12; i+=2)
- window_write16(vp, 0, 2, i);
-
- if (vp->cb_fn_base) {
- unsigned short n = window_read16(vp, 2, Wn2_ResetOptions) & ~0x4010;
- if (vp->drv_flags & INVERT_LED_PWR)
- n |= 0x10;
- if (vp->drv_flags & INVERT_MII_PWR)
- n |= 0x4000;
- window_write16(vp, n, 2, Wn2_ResetOptions);
- }
-
- if (dev->if_port == XCVR_10base2)
- /* Start the thinnet transceiver. We should really wait 50ms...*/
- iowrite16(StartCoax, ioaddr + EL3_CMD);
- if (dev->if_port != XCVR_NWAY) {
- window_write16(vp,
- (window_read16(vp, 4, Wn4_Media) &
- ~(Media_10TP|Media_SQE)) |
- media_tbl[dev->if_port].media_bits,
- 4, Wn4_Media);
- }
-
- /* Switch to the stats window, and clear all stats by reading. */
- iowrite16(StatsDisable, ioaddr + EL3_CMD);
- for (i = 0; i < 10; i++)
- window_read8(vp, 6, i);
- window_read16(vp, 6, 10);
- window_read16(vp, 6, 12);
- /* New: On the Vortex we must also clear the BadSSD counter. */
- window_read8(vp, 4, 12);
- /* ..and on the Boomerang we enable the extra statistics bits. */
- window_write16(vp, 0x0040, 4, Wn4_NetDiag);
-
- if (vp->full_bus_master_rx) { /* Boomerang bus master. */
- vp->cur_rx = vp->dirty_rx = 0;
- /* Initialize the RxEarly register as recommended. */
- iowrite16(SetRxThreshold + (1536>>2), ioaddr + EL3_CMD);
- iowrite32(0x0020, ioaddr + PktStatus);
- iowrite32(vp->rx_ring_dma, ioaddr + UpListPtr);
- }
- if (vp->full_bus_master_tx) { /* Boomerang bus master Tx. */
- vp->cur_tx = vp->dirty_tx = 0;
- if (vp->drv_flags & IS_BOOMERANG)
- iowrite8(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold); /* Room for a packet. */
- /* Clear the Rx, Tx rings. */
- for (i = 0; i < RX_RING_SIZE; i++) /* AKPM: this is done in vortex_open, too */
- vp->rx_ring[i].status = 0;
- for (i = 0; i < TX_RING_SIZE; i++)
- vp->tx_skbuff[i] = NULL;
- iowrite32(0, ioaddr + DownListPtr);
- }
- /* Set receiver mode: presumably accept b-case and phys addr only. */
- set_rx_mode(dev);
- /* enable 802.1q tagged frames */
- set_8021q_mode(dev, 1);
- iowrite16(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
-
- iowrite16(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
- iowrite16(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
- /* Allow status bits to be seen. */
- vp->status_enable = SetStatusEnb | HostError|IntReq|StatsFull|TxComplete|
- (vp->full_bus_master_tx ? DownComplete : TxAvailable) |
- (vp->full_bus_master_rx ? UpComplete : RxComplete) |
- (vp->bus_master ? DMADone : 0);
- vp->intr_enable = SetIntrEnb | IntLatch | TxAvailable |
- (vp->full_bus_master_rx ? 0 : RxComplete) |
- StatsFull | HostError | TxComplete | IntReq
- | (vp->bus_master ? DMADone : 0) | UpComplete | DownComplete;
- iowrite16(vp->status_enable, ioaddr + EL3_CMD);
- /* Ack all pending events, and set active indicator mask. */
- iowrite16(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
- ioaddr + EL3_CMD);
- iowrite16(vp->intr_enable, ioaddr + EL3_CMD);
- if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
- iowrite32(0x8000, vp->cb_fn_base + 4);
- netif_start_queue (dev);
-err_out:
- return err;
-}
-
-static int
-vortex_open(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- int i;
- int retval;
-
- /* Use the now-standard shared IRQ implementation. */
- if ((retval = request_irq(dev->irq, vp->full_bus_master_rx ?
- boomerang_interrupt : vortex_interrupt, IRQF_SHARED, dev->name, dev))) {
- pr_err("%s: Could not reserve IRQ %d\n", dev->name, dev->irq);
- goto err;
- }
-
- if (vp->full_bus_master_rx) { /* Boomerang bus master. */
- if (vortex_debug > 2)
- pr_debug("%s: Filling in the Rx ring.\n", dev->name);
- for (i = 0; i < RX_RING_SIZE; i++) {
- struct sk_buff *skb;
- vp->rx_ring[i].next = cpu_to_le32(vp->rx_ring_dma + sizeof(struct boom_rx_desc) * (i+1));
- vp->rx_ring[i].status = 0; /* Clear complete bit. */
- vp->rx_ring[i].length = cpu_to_le32(PKT_BUF_SZ | LAST_FRAG);
-
- skb = __netdev_alloc_skb(dev, PKT_BUF_SZ + NET_IP_ALIGN,
- GFP_KERNEL);
- vp->rx_skbuff[i] = skb;
- if (skb == NULL)
- break; /* Bad news! */
-
- skb_reserve(skb, NET_IP_ALIGN); /* Align IP on 16 byte boundaries */
- vp->rx_ring[i].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
- }
- if (i != RX_RING_SIZE) {
- int j;
- pr_emerg("%s: no memory for rx ring\n", dev->name);
- for (j = 0; j < i; j++) {
- if (vp->rx_skbuff[j]) {
- dev_kfree_skb(vp->rx_skbuff[j]);
- vp->rx_skbuff[j] = NULL;
- }
- }
- retval = -ENOMEM;
- goto err_free_irq;
- }
- /* Wrap the ring. */
- vp->rx_ring[i-1].next = cpu_to_le32(vp->rx_ring_dma);
- }
-
- retval = vortex_up(dev);
- if (!retval)
- goto out;
-
-err_free_irq:
- free_irq(dev->irq, dev);
-err:
- if (vortex_debug > 1)
- pr_err("%s: vortex_open() fails: returning %d\n", dev->name, retval);
-out:
- return retval;
-}
-
-static void
-vortex_timer(unsigned long data)
-{
- struct net_device *dev = (struct net_device *)data;
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- int next_tick = 60*HZ;
- int ok = 0;
- int media_status;
-
- if (vortex_debug > 2) {
- pr_debug("%s: Media selection timer tick happened, %s.\n",
- dev->name, media_tbl[dev->if_port].name);
- pr_debug("dev->watchdog_timeo=%d\n", dev->watchdog_timeo);
- }
-
- media_status = window_read16(vp, 4, Wn4_Media);
- switch (dev->if_port) {
- case XCVR_10baseT: case XCVR_100baseTx: case XCVR_100baseFx:
- if (media_status & Media_LnkBeat) {
- netif_carrier_on(dev);
- ok = 1;
- if (vortex_debug > 1)
- pr_debug("%s: Media %s has link beat, %x.\n",
- dev->name, media_tbl[dev->if_port].name, media_status);
- } else {
- netif_carrier_off(dev);
- if (vortex_debug > 1) {
- pr_debug("%s: Media %s has no link beat, %x.\n",
- dev->name, media_tbl[dev->if_port].name, media_status);
- }
- }
- break;
- case XCVR_MII: case XCVR_NWAY:
- {
- ok = 1;
- vortex_check_media(dev, 0);
- }
- break;
- default: /* Other media types handled by Tx timeouts. */
- if (vortex_debug > 1)
- pr_debug("%s: Media %s has no indication, %x.\n",
- dev->name, media_tbl[dev->if_port].name, media_status);
- ok = 1;
- }
-
- if (!netif_carrier_ok(dev))
- next_tick = 5*HZ;
-
- if (vp->medialock)
- goto leave_media_alone;
-
- if (!ok) {
- unsigned int config;
-
- spin_lock_irq(&vp->lock);
-
- do {
- dev->if_port = media_tbl[dev->if_port].next;
- } while ( ! (vp->available_media & media_tbl[dev->if_port].mask));
- if (dev->if_port == XCVR_Default) { /* Go back to default. */
- dev->if_port = vp->default_media;
- if (vortex_debug > 1)
- pr_debug("%s: Media selection failing, using default %s port.\n",
- dev->name, media_tbl[dev->if_port].name);
- } else {
- if (vortex_debug > 1)
- pr_debug("%s: Media selection failed, now trying %s port.\n",
- dev->name, media_tbl[dev->if_port].name);
- next_tick = media_tbl[dev->if_port].wait;
- }
- window_write16(vp,
- (media_status & ~(Media_10TP|Media_SQE)) |
- media_tbl[dev->if_port].media_bits,
- 4, Wn4_Media);
-
- config = window_read32(vp, 3, Wn3_Config);
- config = BFINS(config, dev->if_port, 20, 4);
- window_write32(vp, config, 3, Wn3_Config);
-
- iowrite16(dev->if_port == XCVR_10base2 ? StartCoax : StopCoax,
- ioaddr + EL3_CMD);
- if (vortex_debug > 1)
- pr_debug("wrote 0x%08x to Wn3_Config\n", config);
- /* AKPM: FIXME: Should reset Rx & Tx here. P60 of 3c90xc.pdf */
-
- spin_unlock_irq(&vp->lock);
- }
-
-leave_media_alone:
- if (vortex_debug > 2)
- pr_debug("%s: Media selection timer finished, %s.\n",
- dev->name, media_tbl[dev->if_port].name);
-
- mod_timer(&vp->timer, RUN_AT(next_tick));
- if (vp->deferred)
- iowrite16(FakeIntr, ioaddr + EL3_CMD);
-}
-
-static void vortex_tx_timeout(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
-
- pr_err("%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
- dev->name, ioread8(ioaddr + TxStatus),
- ioread16(ioaddr + EL3_STATUS));
- pr_err(" diagnostics: net %04x media %04x dma %08x fifo %04x\n",
- window_read16(vp, 4, Wn4_NetDiag),
- window_read16(vp, 4, Wn4_Media),
- ioread32(ioaddr + PktStatus),
- window_read16(vp, 4, Wn4_FIFODiag));
- /* Slight code bloat to be user friendly. */
- if ((ioread8(ioaddr + TxStatus) & 0x88) == 0x88)
- pr_err("%s: Transmitter encountered 16 collisions --"
- " network cable problem?\n", dev->name);
- if (ioread16(ioaddr + EL3_STATUS) & IntLatch) {
- pr_err("%s: Interrupt posted but not delivered --"
- " IRQ blocked by another device?\n", dev->name);
- /* Bad idea here.. but we might as well handle a few events. */
- {
- /*
- * Block interrupts because vortex_interrupt does a bare spin_lock()
- */
- unsigned long flags;
- local_irq_save(flags);
- if (vp->full_bus_master_tx)
- boomerang_interrupt(dev->irq, dev);
- else
- vortex_interrupt(dev->irq, dev);
- local_irq_restore(flags);
- }
- }
-
- if (vortex_debug > 0)
- dump_tx_ring(dev);
-
- issue_and_wait(dev, TxReset);
-
- dev->stats.tx_errors++;
- if (vp->full_bus_master_tx) {
- pr_debug("%s: Resetting the Tx ring pointer.\n", dev->name);
- if (vp->cur_tx - vp->dirty_tx > 0 && ioread32(ioaddr + DownListPtr) == 0)
- iowrite32(vp->tx_ring_dma + (vp->dirty_tx % TX_RING_SIZE) * sizeof(struct boom_tx_desc),
- ioaddr + DownListPtr);
- if (vp->cur_tx - vp->dirty_tx < TX_RING_SIZE)
- netif_wake_queue (dev);
- if (vp->drv_flags & IS_BOOMERANG)
- iowrite8(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold);
- iowrite16(DownUnstall, ioaddr + EL3_CMD);
- } else {
- dev->stats.tx_dropped++;
- netif_wake_queue(dev);
- }
-
- /* Issue Tx Enable */
- iowrite16(TxEnable, ioaddr + EL3_CMD);
- dev->trans_start = jiffies; /* prevent tx timeout */
-}
-
-/*
- * Handle uncommon interrupt sources. This is a separate routine to minimize
- * the cache impact.
- */
-static void
-vortex_error(struct net_device *dev, int status)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- int do_tx_reset = 0, reset_mask = 0;
- unsigned char tx_status = 0;
-
- if (vortex_debug > 2) {
- pr_err("%s: vortex_error(), status=0x%x\n", dev->name, status);
- }
-
- if (status & TxComplete) { /* Really "TxError" for us. */
- tx_status = ioread8(ioaddr + TxStatus);
- /* Presumably a tx-timeout. We must merely re-enable. */
- if (vortex_debug > 2 ||
- (tx_status != 0x88 && vortex_debug > 0)) {
- pr_err("%s: Transmit error, Tx status register %2.2x.\n",
- dev->name, tx_status);
- if (tx_status == 0x82) {
- pr_err("Probably a duplex mismatch. See "
- "Documentation/networking/vortex.txt\n");
- }
- dump_tx_ring(dev);
- }
- if (tx_status & 0x14) dev->stats.tx_fifo_errors++;
- if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
- if (tx_status & 0x08) vp->xstats.tx_max_collisions++;
- iowrite8(0, ioaddr + TxStatus);
- if (tx_status & 0x30) { /* txJabber or txUnderrun */
- do_tx_reset = 1;
- } else if ((tx_status & 0x08) && (vp->drv_flags & MAX_COLLISION_RESET)) { /* maxCollisions */
- do_tx_reset = 1;
- reset_mask = 0x0108; /* Reset interface logic, but not download logic */
- } else { /* Merely re-enable the transmitter. */
- iowrite16(TxEnable, ioaddr + EL3_CMD);
- }
- }
-
- if (status & RxEarly) /* Rx early is unused. */
- iowrite16(AckIntr | RxEarly, ioaddr + EL3_CMD);
-
- if (status & StatsFull) { /* Empty statistics. */
- static int DoneDidThat;
- if (vortex_debug > 4)
- pr_debug("%s: Updating stats.\n", dev->name);
- update_stats(ioaddr, dev);
- /* HACK: Disable statistics as an interrupt source. */
- /* This occurs when we have the wrong media type! */
- if (DoneDidThat == 0 &&
- ioread16(ioaddr + EL3_STATUS) & StatsFull) {
- pr_warning("%s: Updating statistics failed, disabling "
- "stats as an interrupt source.\n", dev->name);
- iowrite16(SetIntrEnb |
- (window_read16(vp, 5, 10) & ~StatsFull),
- ioaddr + EL3_CMD);
- vp->intr_enable &= ~StatsFull;
- DoneDidThat++;
- }
- }
- if (status & IntReq) { /* Restore all interrupt sources. */
- iowrite16(vp->status_enable, ioaddr + EL3_CMD);
- iowrite16(vp->intr_enable, ioaddr + EL3_CMD);
- }
- if (status & HostError) {
- u16 fifo_diag;
- fifo_diag = window_read16(vp, 4, Wn4_FIFODiag);
- pr_err("%s: Host error, FIFO diagnostic register %4.4x.\n",
- dev->name, fifo_diag);
- /* Adapter failure requires Tx/Rx reset and reinit. */
- if (vp->full_bus_master_tx) {
- int bus_status = ioread32(ioaddr + PktStatus);
- /* 0x80000000 PCI master abort. */
- /* 0x40000000 PCI target abort. */
- if (vortex_debug)
- pr_err("%s: PCI bus error, bus status %8.8x\n", dev->name, bus_status);
-
- /* In this case, blow the card away */
- /* Must not enter D3 or we can't legally issue the reset! */
- vortex_down(dev, 0);
- issue_and_wait(dev, TotalReset | 0xff);
- vortex_up(dev); /* AKPM: bug. vortex_up() assumes that the rx ring is full. It may not be. */
- } else if (fifo_diag & 0x0400)
- do_tx_reset = 1;
- if (fifo_diag & 0x3000) {
- /* Reset Rx fifo and upload logic */
- issue_and_wait(dev, RxReset|0x07);
- /* Set the Rx filter to the current state. */
- set_rx_mode(dev);
- /* enable 802.1q VLAN tagged frames */
- set_8021q_mode(dev, 1);
- iowrite16(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
- iowrite16(AckIntr | HostError, ioaddr + EL3_CMD);
- }
- }
-
- if (do_tx_reset) {
- issue_and_wait(dev, TxReset|reset_mask);
- iowrite16(TxEnable, ioaddr + EL3_CMD);
- if (!vp->full_bus_master_tx)
- netif_wake_queue(dev);
- }
-}
-
-static netdev_tx_t
-vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
-
- /* Put out the doubleword header... */
- iowrite32(skb->len, ioaddr + TX_FIFO);
- if (vp->bus_master) {
- /* Set the bus-master controller to transfer the packet. */
- int len = (skb->len + 3) & ~3;
- vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len,
- PCI_DMA_TODEVICE);
- spin_lock_irq(&vp->window_lock);
- window_set(vp, 7);
- iowrite32(vp->tx_skb_dma, ioaddr + Wn7_MasterAddr);
- iowrite16(len, ioaddr + Wn7_MasterLen);
- spin_unlock_irq(&vp->window_lock);
- vp->tx_skb = skb;
- iowrite16(StartDMADown, ioaddr + EL3_CMD);
- /* netif_wake_queue() will be called at the DMADone interrupt. */
- } else {
- /* ... and the packet rounded to a doubleword. */
- iowrite32_rep(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
- dev_kfree_skb (skb);
- if (ioread16(ioaddr + TxFree) > 1536) {
- netif_start_queue (dev); /* AKPM: redundant? */
- } else {
- /* Interrupt us when the FIFO has room for max-sized packet. */
- netif_stop_queue(dev);
- iowrite16(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
- }
- }
-
-
- /* Clear the Tx status stack. */
- {
- int tx_status;
- int i = 32;
-
- while (--i > 0 && (tx_status = ioread8(ioaddr + TxStatus)) > 0) {
- if (tx_status & 0x3C) { /* A Tx-disabling error occurred. */
- if (vortex_debug > 2)
- pr_debug("%s: Tx error, status %2.2x.\n",
- dev->name, tx_status);
- if (tx_status & 0x04) dev->stats.tx_fifo_errors++;
- if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
- if (tx_status & 0x30) {
- issue_and_wait(dev, TxReset);
- }
- iowrite16(TxEnable, ioaddr + EL3_CMD);
- }
- iowrite8(0x00, ioaddr + TxStatus); /* Pop the status stack. */
- }
- }
- return NETDEV_TX_OK;
-}
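Both the bus-master and the PIO paths above round the frame up to a whole number of 32-bit words before handing it to the NIC: `(skb->len + 3) & ~3` for the DMA length, `(skb->len + 3) >> 2` for the FIFO write count. A minimal sketch of that rounding (the helper name and the 61-byte frame are illustrative only):

static unsigned int tx_dwords(unsigned int len)
{
	/* e.g. a 61-byte frame -> (61 + 3) >> 2 = 16 doubleword writes,
	 * i.e. 64 bytes pushed into TX_FIFO / given to the bus master. */
	return (len + 3) >> 2;
}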
-
-static netdev_tx_t
-boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- /* Calculate the next Tx descriptor entry. */
- int entry = vp->cur_tx % TX_RING_SIZE;
- struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE];
- unsigned long flags;
-
- if (vortex_debug > 6) {
- pr_debug("boomerang_start_xmit()\n");
- pr_debug("%s: Trying to send a packet, Tx index %d.\n",
- dev->name, vp->cur_tx);
- }
-
- /*
- * We can't allow a recursion from our interrupt handler back into the
- * tx routine, as they take the same spin lock, and that causes
- * deadlock. Just return NETDEV_TX_BUSY and let the stack try again in
- * a bit
- */
- if (vp->handling_irq)
- return NETDEV_TX_BUSY;
-
- if (vp->cur_tx - vp->dirty_tx >= TX_RING_SIZE) {
- if (vortex_debug > 0)
- pr_warning("%s: BUG! Tx Ring full, refusing to send buffer.\n",
- dev->name);
- netif_stop_queue(dev);
- return NETDEV_TX_BUSY;
- }
-
- vp->tx_skbuff[entry] = skb;
-
- vp->tx_ring[entry].next = 0;
-#if DO_ZEROCOPY
- if (skb->ip_summed != CHECKSUM_PARTIAL)
- vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
- else
- vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum);
-
- if (!skb_shinfo(skb)->nr_frags) {
- vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
- skb->len, PCI_DMA_TODEVICE));
- vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len | LAST_FRAG);
- } else {
- int i;
-
- vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
- skb_headlen(skb), PCI_DMA_TODEVICE));
- vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb_headlen(skb));
-
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
-
- vp->tx_ring[entry].frag[i+1].addr =
- cpu_to_le32(pci_map_single(VORTEX_PCI(vp),
- (void*)page_address(frag->page) + frag->page_offset,
- frag->size, PCI_DMA_TODEVICE));
-
- if (i == skb_shinfo(skb)->nr_frags-1)
- vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size|LAST_FRAG);
- else
- vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size);
- }
- }
-#else
- vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE));
- vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG);
- vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
-#endif
-
- spin_lock_irqsave(&vp->lock, flags);
- /* Wait for the stall to complete. */
- issue_and_wait(dev, DownStall);
- prev_entry->next = cpu_to_le32(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc));
- if (ioread32(ioaddr + DownListPtr) == 0) {
- iowrite32(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc), ioaddr + DownListPtr);
- vp->queued_packet++;
- }
-
- vp->cur_tx++;
- if (vp->cur_tx - vp->dirty_tx > TX_RING_SIZE - 1) {
- netif_stop_queue (dev);
- } else { /* Clear previous interrupt enable. */
-#if defined(tx_interrupt_mitigation)
- /* Dubious. If in boomerang_interrupt "faster" cyclone ifdef
- * were selected, this would corrupt DN_COMPLETE. No?
- */
- prev_entry->status &= cpu_to_le32(~TxIntrUploaded);
-#endif
- }
- iowrite16(DownUnstall, ioaddr + EL3_CMD);
- spin_unlock_irqrestore(&vp->lock, flags);
- return NETDEV_TX_OK;
-}
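Note that cur_tx and dirty_tx above are free-running counters: they are never wrapped, only reduced modulo TX_RING_SIZE when indexing the ring, so their difference is always the number of in-flight descriptors. A sketch of that pattern (helper names are illustrative, not from the driver):

static inline int tx_slot(unsigned int cur_tx)
{
	return cur_tx % TX_RING_SIZE;		/* next descriptor to fill */
}

static inline int tx_ring_full(unsigned int cur_tx, unsigned int dirty_tx)
{
	return cur_tx - dirty_tx >= TX_RING_SIZE;	/* every slot in flight */
}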
-
-/* The interrupt handler does all of the Rx thread work and cleans up
- after the Tx thread. */
-
-/*
- * This is the ISR for the vortex series chips.
- * full_bus_master_tx == 0 && full_bus_master_rx == 0
- */
-
-static irqreturn_t
-vortex_interrupt(int irq, void *dev_id)
-{
- struct net_device *dev = dev_id;
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr;
- int status;
- int work_done = max_interrupt_work;
- int handled = 0;
-
- ioaddr = vp->ioaddr;
- spin_lock(&vp->lock);
-
- status = ioread16(ioaddr + EL3_STATUS);
-
- if (vortex_debug > 6)
- pr_debug("vortex_interrupt(). status=0x%4x\n", status);
-
- if ((status & IntLatch) == 0)
- goto handler_exit; /* No interrupt: shared IRQs cause this */
- handled = 1;
-
- if (status & IntReq) {
- status |= vp->deferred;
- vp->deferred = 0;
- }
-
- if (status == 0xffff) /* h/w no longer present (hotplug)? */
- goto handler_exit;
-
- if (vortex_debug > 4)
- pr_debug("%s: interrupt, status %4.4x, latency %d ticks.\n",
- dev->name, status, ioread8(ioaddr + Timer));
-
- spin_lock(&vp->window_lock);
- window_set(vp, 7);
-
- do {
- if (vortex_debug > 5)
- pr_debug("%s: In interrupt loop, status %4.4x.\n",
- dev->name, status);
- if (status & RxComplete)
- vortex_rx(dev);
-
- if (status & TxAvailable) {
- if (vortex_debug > 5)
- pr_debug(" TX room bit was handled.\n");
- /* There's room in the FIFO for a full-sized packet. */
- iowrite16(AckIntr | TxAvailable, ioaddr + EL3_CMD);
- netif_wake_queue (dev);
- }
-
- if (status & DMADone) {
- if (ioread16(ioaddr + Wn7_MasterStatus) & 0x1000) {
- iowrite16(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
- pci_unmap_single(VORTEX_PCI(vp), vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3, PCI_DMA_TODEVICE);
- dev_kfree_skb_irq(vp->tx_skb); /* Release the transferred buffer */
- if (ioread16(ioaddr + TxFree) > 1536) {
- /*
- * AKPM: FIXME: I don't think we need this. If the queue was stopped due to
- * insufficient FIFO room, the TxAvailable test will succeed and call
- * netif_wake_queue()
- */
- netif_wake_queue(dev);
- } else { /* Interrupt when FIFO has room for max-sized packet. */
- iowrite16(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
- netif_stop_queue(dev);
- }
- }
- }
- /* Check for all uncommon interrupts at once. */
- if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq)) {
- if (status == 0xffff)
- break;
- if (status & RxEarly)
- vortex_rx(dev);
- spin_unlock(&vp->window_lock);
- vortex_error(dev, status);
- spin_lock(&vp->window_lock);
- window_set(vp, 7);
- }
-
- if (--work_done < 0) {
- pr_warning("%s: Too much work in interrupt, status %4.4x.\n",
- dev->name, status);
- /* Disable all pending interrupts. */
- do {
- vp->deferred |= status;
- iowrite16(SetStatusEnb | (~vp->deferred & vp->status_enable),
- ioaddr + EL3_CMD);
- iowrite16(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
- } while ((status = ioread16(ioaddr + EL3_CMD)) & IntLatch);
- /* The timer will reenable interrupts. */
- mod_timer(&vp->timer, jiffies + 1*HZ);
- break;
- }
- /* Acknowledge the IRQ. */
- iowrite16(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
- } while ((status = ioread16(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));
-
- spin_unlock(&vp->window_lock);
-
- if (vortex_debug > 4)
- pr_debug("%s: exiting interrupt, status %4.4x.\n",
- dev->name, status);
-handler_exit:
- spin_unlock(&vp->lock);
- return IRQ_RETVAL(handled);
-}
-
-/*
- * This is the ISR for the boomerang series chips.
- * full_bus_master_tx == 1 && full_bus_master_rx == 1
- */
-
-static irqreturn_t
-boomerang_interrupt(int irq, void *dev_id)
-{
- struct net_device *dev = dev_id;
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr;
- int status;
- int work_done = max_interrupt_work;
-
- ioaddr = vp->ioaddr;
-
-
- /*
- * It seems dopey to put the spinlock this early, but we could race against vortex_tx_timeout
- * and boomerang_start_xmit
- */
- spin_lock(&vp->lock);
- vp->handling_irq = 1;
-
- status = ioread16(ioaddr + EL3_STATUS);
-
- if (vortex_debug > 6)
- pr_debug("boomerang_interrupt. status=0x%4x\n", status);
-
- if ((status & IntLatch) == 0)
- goto handler_exit; /* No interrupt: shared IRQs can cause this */
-
- if (status == 0xffff) { /* h/w no longer present (hotplug)? */
- if (vortex_debug > 1)
- pr_debug("boomerang_interrupt(1): status = 0xffff\n");
- goto handler_exit;
- }
-
- if (status & IntReq) {
- status |= vp->deferred;
- vp->deferred = 0;
- }
-
- if (vortex_debug > 4)
- pr_debug("%s: interrupt, status %4.4x, latency %d ticks.\n",
- dev->name, status, ioread8(ioaddr + Timer));
- do {
- if (vortex_debug > 5)
- pr_debug("%s: In interrupt loop, status %4.4x.\n",
- dev->name, status);
- if (status & UpComplete) {
- iowrite16(AckIntr | UpComplete, ioaddr + EL3_CMD);
- if (vortex_debug > 5)
- pr_debug("boomerang_interrupt->boomerang_rx\n");
- boomerang_rx(dev);
- }
-
- if (status & DownComplete) {
- unsigned int dirty_tx = vp->dirty_tx;
-
- iowrite16(AckIntr | DownComplete, ioaddr + EL3_CMD);
- while (vp->cur_tx - dirty_tx > 0) {
- int entry = dirty_tx % TX_RING_SIZE;
-#if 1 /* AKPM: the latter is faster, but cyclone-only */
- if (ioread32(ioaddr + DownListPtr) ==
- vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc))
- break; /* It still hasn't been processed. */
-#else
- if ((vp->tx_ring[entry].status & DN_COMPLETE) == 0)
- break; /* It still hasn't been processed. */
-#endif
-
- if (vp->tx_skbuff[entry]) {
- struct sk_buff *skb = vp->tx_skbuff[entry];
-#if DO_ZEROCOPY
- int i;
- for (i=0; i<=skb_shinfo(skb)->nr_frags; i++)
- pci_unmap_single(VORTEX_PCI(vp),
- le32_to_cpu(vp->tx_ring[entry].frag[i].addr),
- le32_to_cpu(vp->tx_ring[entry].frag[i].length)&0xFFF,
- PCI_DMA_TODEVICE);
-#else
- pci_unmap_single(VORTEX_PCI(vp),
- le32_to_cpu(vp->tx_ring[entry].addr), skb->len, PCI_DMA_TODEVICE);
-#endif
- dev_kfree_skb_irq(skb);
- vp->tx_skbuff[entry] = NULL;
- } else {
- pr_debug("boomerang_interrupt: no skb!\n");
- }
- /* dev->stats.tx_packets++; Counted below. */
- dirty_tx++;
- }
- vp->dirty_tx = dirty_tx;
- if (vp->cur_tx - dirty_tx <= TX_RING_SIZE - 1) {
- if (vortex_debug > 6)
- pr_debug("boomerang_interrupt: wake queue\n");
- netif_wake_queue (dev);
- }
- }
-
- /* Check for all uncommon interrupts at once. */
- if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq))
- vortex_error(dev, status);
-
- if (--work_done < 0) {
- pr_warning("%s: Too much work in interrupt, status %4.4x.\n",
- dev->name, status);
- /* Disable all pending interrupts. */
- do {
- vp->deferred |= status;
- iowrite16(SetStatusEnb | (~vp->deferred & vp->status_enable),
- ioaddr + EL3_CMD);
- iowrite16(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
- } while ((status = ioread16(ioaddr + EL3_CMD)) & IntLatch);
- /* The timer will reenable interrupts. */
- mod_timer(&vp->timer, jiffies + 1*HZ);
- break;
- }
- /* Acknowledge the IRQ. */
- iowrite16(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
- if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
- iowrite32(0x8000, vp->cb_fn_base + 4);
-
- } while ((status = ioread16(ioaddr + EL3_STATUS)) & IntLatch);
-
- if (vortex_debug > 4)
- pr_debug("%s: exiting interrupt, status %4.4x.\n",
- dev->name, status);
-handler_exit:
- vp->handling_irq = 0;
- spin_unlock(&vp->lock);
- return IRQ_HANDLED;
-}
-
-static int vortex_rx(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- int i;
- short rx_status;
-
- if (vortex_debug > 5)
- pr_debug("vortex_rx(): status %4.4x, rx_status %4.4x.\n",
- ioread16(ioaddr+EL3_STATUS), ioread16(ioaddr+RxStatus));
- while ((rx_status = ioread16(ioaddr + RxStatus)) > 0) {
- if (rx_status & 0x4000) { /* Error, update stats. */
- unsigned char rx_error = ioread8(ioaddr + RxErrors);
- if (vortex_debug > 2)
- pr_debug(" Rx error: status %2.2x.\n", rx_error);
- dev->stats.rx_errors++;
- if (rx_error & 0x01) dev->stats.rx_over_errors++;
- if (rx_error & 0x02) dev->stats.rx_length_errors++;
- if (rx_error & 0x04) dev->stats.rx_frame_errors++;
- if (rx_error & 0x08) dev->stats.rx_crc_errors++;
- if (rx_error & 0x10) dev->stats.rx_length_errors++;
- } else {
- /* The packet length: up to 4.5K!. */
- int pkt_len = rx_status & 0x1fff;
- struct sk_buff *skb;
-
- skb = dev_alloc_skb(pkt_len + 5);
- if (vortex_debug > 4)
- pr_debug("Receiving packet size %d status %4.4x.\n",
- pkt_len, rx_status);
- if (skb != NULL) {
- skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
- /* 'skb_put()' points to the start of sk_buff data area. */
- if (vp->bus_master &&
- ! (ioread16(ioaddr + Wn7_MasterStatus) & 0x8000)) {
- dma_addr_t dma = pci_map_single(VORTEX_PCI(vp), skb_put(skb, pkt_len),
- pkt_len, PCI_DMA_FROMDEVICE);
- iowrite32(dma, ioaddr + Wn7_MasterAddr);
- iowrite16((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
- iowrite16(StartDMAUp, ioaddr + EL3_CMD);
- while (ioread16(ioaddr + Wn7_MasterStatus) & 0x8000)
- ;
- pci_unmap_single(VORTEX_PCI(vp), dma, pkt_len, PCI_DMA_FROMDEVICE);
- } else {
- ioread32_rep(ioaddr + RX_FIFO,
- skb_put(skb, pkt_len),
- (pkt_len + 3) >> 2);
- }
- iowrite16(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
- skb->protocol = eth_type_trans(skb, dev);
- netif_rx(skb);
- dev->stats.rx_packets++;
- /* Wait a limited time to go to next packet. */
- for (i = 200; i >= 0; i--)
- if ( ! (ioread16(ioaddr + EL3_STATUS) & CmdInProgress))
- break;
- continue;
- } else if (vortex_debug > 0)
- pr_notice("%s: No memory to allocate a sk_buff of size %d.\n",
- dev->name, pkt_len);
- dev->stats.rx_dropped++;
- }
- issue_and_wait(dev, RxDiscard);
- }
-
- return 0;
-}
-
-static int
-boomerang_rx(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- int entry = vp->cur_rx % RX_RING_SIZE;
- void __iomem *ioaddr = vp->ioaddr;
- int rx_status;
- int rx_work_limit = vp->dirty_rx + RX_RING_SIZE - vp->cur_rx;
-
- if (vortex_debug > 5)
- pr_debug("boomerang_rx(): status %4.4x\n", ioread16(ioaddr+EL3_STATUS));
-
- while ((rx_status = le32_to_cpu(vp->rx_ring[entry].status)) & RxDComplete){
- if (--rx_work_limit < 0)
- break;
- if (rx_status & RxDError) { /* Error, update stats. */
- unsigned char rx_error = rx_status >> 16;
- if (vortex_debug > 2)
- pr_debug(" Rx error: status %2.2x.\n", rx_error);
- dev->stats.rx_errors++;
- if (rx_error & 0x01) dev->stats.rx_over_errors++;
- if (rx_error & 0x02) dev->stats.rx_length_errors++;
- if (rx_error & 0x04) dev->stats.rx_frame_errors++;
- if (rx_error & 0x08) dev->stats.rx_crc_errors++;
- if (rx_error & 0x10) dev->stats.rx_length_errors++;
- } else {
- /* The packet length: up to 4.5K!. */
- int pkt_len = rx_status & 0x1fff;
- struct sk_buff *skb;
- dma_addr_t dma = le32_to_cpu(vp->rx_ring[entry].addr);
-
- if (vortex_debug > 4)
- pr_debug("Receiving packet size %d status %4.4x.\n",
- pkt_len, rx_status);
-
- /* Check if the packet is long enough to just accept without
- copying to a properly sized skbuff. */
- if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
- skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
- pci_dma_sync_single_for_cpu(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
- /* 'skb_put()' points to the start of sk_buff data area. */
- memcpy(skb_put(skb, pkt_len),
- vp->rx_skbuff[entry]->data,
- pkt_len);
- pci_dma_sync_single_for_device(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
- vp->rx_copy++;
- } else {
- /* Pass up the skbuff already on the Rx ring. */
- skb = vp->rx_skbuff[entry];
- vp->rx_skbuff[entry] = NULL;
- skb_put(skb, pkt_len);
- pci_unmap_single(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
- vp->rx_nocopy++;
- }
- skb->protocol = eth_type_trans(skb, dev);
- { /* Use hardware checksum info. */
- int csum_bits = rx_status & 0xee000000;
- if (csum_bits &&
- (csum_bits == (IPChksumValid | TCPChksumValid) ||
- csum_bits == (IPChksumValid | UDPChksumValid))) {
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- vp->rx_csumhits++;
- }
- }
- netif_rx(skb);
- dev->stats.rx_packets++;
- }
- entry = (++vp->cur_rx) % RX_RING_SIZE;
- }
- /* Refill the Rx ring buffers. */
- for (; vp->cur_rx - vp->dirty_rx > 0; vp->dirty_rx++) {
- struct sk_buff *skb;
- entry = vp->dirty_rx % RX_RING_SIZE;
- if (vp->rx_skbuff[entry] == NULL) {
- skb = netdev_alloc_skb_ip_align(dev, PKT_BUF_SZ);
- if (skb == NULL) {
- static unsigned long last_jif;
- if (time_after(jiffies, last_jif + 10 * HZ)) {
- pr_warning("%s: memory shortage\n", dev->name);
- last_jif = jiffies;
- }
- if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE)
- mod_timer(&vp->rx_oom_timer, RUN_AT(HZ * 1));
- break; /* Bad news! */
- }
-
- vp->rx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
- vp->rx_skbuff[entry] = skb;
- }
- vp->rx_ring[entry].status = 0; /* Clear complete bit. */
- iowrite16(UpUnstall, ioaddr + EL3_CMD);
- }
- return 0;
-}
-
-/*
- * If we've hit a total OOM refilling the Rx ring we poll once a second
- * for some memory. Otherwise there is no way to restart the rx process.
- */
-static void
-rx_oom_timer(unsigned long arg)
-{
- struct net_device *dev = (struct net_device *)arg;
- struct vortex_private *vp = netdev_priv(dev);
-
- spin_lock_irq(&vp->lock);
- if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE) /* This test is redundant, but makes me feel good */
- boomerang_rx(dev);
- if (vortex_debug > 1) {
- pr_debug("%s: rx_oom_timer %s\n", dev->name,
- ((vp->cur_rx - vp->dirty_rx) != RX_RING_SIZE) ? "succeeded" : "retrying");
- }
- spin_unlock_irq(&vp->lock);
-}
-
-static void
-vortex_down(struct net_device *dev, int final_down)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
-
- netif_stop_queue (dev);
-
- del_timer_sync(&vp->rx_oom_timer);
- del_timer_sync(&vp->timer);
-
- /* Turn off statistics ASAP. We update dev->stats below. */
- iowrite16(StatsDisable, ioaddr + EL3_CMD);
-
- /* Disable the receiver and transmitter. */
- iowrite16(RxDisable, ioaddr + EL3_CMD);
- iowrite16(TxDisable, ioaddr + EL3_CMD);
-
- /* Disable receiving 802.1q tagged frames */
- set_8021q_mode(dev, 0);
-
- if (dev->if_port == XCVR_10base2)
- /* Turn off thinnet power. Green! */
- iowrite16(StopCoax, ioaddr + EL3_CMD);
-
- iowrite16(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
-
- update_stats(ioaddr, dev);
- if (vp->full_bus_master_rx)
- iowrite32(0, ioaddr + UpListPtr);
- if (vp->full_bus_master_tx)
- iowrite32(0, ioaddr + DownListPtr);
-
- if (final_down && VORTEX_PCI(vp)) {
- vp->pm_state_valid = 1;
- pci_save_state(VORTEX_PCI(vp));
- acpi_set_WOL(dev);
- }
-}
-
-static int
-vortex_close(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- int i;
-
- if (netif_device_present(dev))
- vortex_down(dev, 1);
-
- if (vortex_debug > 1) {
- pr_debug("%s: vortex_close() status %4.4x, Tx status %2.2x.\n",
- dev->name, ioread16(ioaddr + EL3_STATUS), ioread8(ioaddr + TxStatus));
- pr_debug("%s: vortex close stats: rx_nocopy %d rx_copy %d"
- " tx_queued %d Rx pre-checksummed %d.\n",
- dev->name, vp->rx_nocopy, vp->rx_copy, vp->queued_packet, vp->rx_csumhits);
- }
-
-#if DO_ZEROCOPY
- if (vp->rx_csumhits &&
- (vp->drv_flags & HAS_HWCKSM) == 0 &&
- (vp->card_idx >= MAX_UNITS || hw_checksums[vp->card_idx] == -1)) {
- pr_warning("%s supports hardware checksums, and we're not using them!\n", dev->name);
- }
-#endif
-
- free_irq(dev->irq, dev);
-
- if (vp->full_bus_master_rx) { /* Free Boomerang bus master Rx buffers. */
- for (i = 0; i < RX_RING_SIZE; i++)
- if (vp->rx_skbuff[i]) {
- pci_unmap_single( VORTEX_PCI(vp), le32_to_cpu(vp->rx_ring[i].addr),
- PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
- dev_kfree_skb(vp->rx_skbuff[i]);
- vp->rx_skbuff[i] = NULL;
- }
- }
- if (vp->full_bus_master_tx) { /* Free Boomerang bus master Tx buffers. */
- for (i = 0; i < TX_RING_SIZE; i++) {
- if (vp->tx_skbuff[i]) {
- struct sk_buff *skb = vp->tx_skbuff[i];
-#if DO_ZEROCOPY
- int k;
-
- for (k=0; k<=skb_shinfo(skb)->nr_frags; k++)
- pci_unmap_single(VORTEX_PCI(vp),
- le32_to_cpu(vp->tx_ring[i].frag[k].addr),
- le32_to_cpu(vp->tx_ring[i].frag[k].length)&0xFFF,
- PCI_DMA_TODEVICE);
-#else
- pci_unmap_single(VORTEX_PCI(vp), le32_to_cpu(vp->tx_ring[i].addr), skb->len, PCI_DMA_TODEVICE);
-#endif
- dev_kfree_skb(skb);
- vp->tx_skbuff[i] = NULL;
- }
- }
- }
-
- return 0;
-}
-
-static void
-dump_tx_ring(struct net_device *dev)
-{
- if (vortex_debug > 0) {
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
-
- if (vp->full_bus_master_tx) {
- int i;
- int stalled = ioread32(ioaddr + PktStatus) & 0x04; /* Possibly racy, but it's only debug stuff */
-
- pr_err(" Flags; bus-master %d, dirty %d(%d) current %d(%d)\n",
- vp->full_bus_master_tx,
- vp->dirty_tx, vp->dirty_tx % TX_RING_SIZE,
- vp->cur_tx, vp->cur_tx % TX_RING_SIZE);
- pr_err(" Transmit list %8.8x vs. %p.\n",
- ioread32(ioaddr + DownListPtr),
- &vp->tx_ring[vp->dirty_tx % TX_RING_SIZE]);
- issue_and_wait(dev, DownStall);
- for (i = 0; i < TX_RING_SIZE; i++) {
- unsigned int length;
-
-#if DO_ZEROCOPY
- length = le32_to_cpu(vp->tx_ring[i].frag[0].length);
-#else
- length = le32_to_cpu(vp->tx_ring[i].length);
-#endif
- pr_err(" %d: @%p length %8.8x status %8.8x\n",
- i, &vp->tx_ring[i], length,
- le32_to_cpu(vp->tx_ring[i].status));
- }
- if (!stalled)
- iowrite16(DownUnstall, ioaddr + EL3_CMD);
- }
- }
-}
-
-static struct net_device_stats *vortex_get_stats(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- unsigned long flags;
-
- if (netif_device_present(dev)) { /* AKPM: Used to be netif_running */
- spin_lock_irqsave (&vp->lock, flags);
- update_stats(ioaddr, dev);
- spin_unlock_irqrestore (&vp->lock, flags);
- }
- return &dev->stats;
-}
-
-/* Update statistics.
- Unlike with the EL3 we need not worry about interrupts changing
- the window setting from underneath us, but we must still guard
- against a race condition with a StatsUpdate interrupt updating the
- table. This is done by checking that the ASM (!) code generated uses
- atomic updates with '+='.
- */
-static void update_stats(void __iomem *ioaddr, struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
-
- /* Unlike the 3c5x9 we need not turn off stats updates while reading. */
- /* Switch to the stats window, and read everything. */
- dev->stats.tx_carrier_errors += window_read8(vp, 6, 0);
- dev->stats.tx_heartbeat_errors += window_read8(vp, 6, 1);
- dev->stats.tx_window_errors += window_read8(vp, 6, 4);
- dev->stats.rx_fifo_errors += window_read8(vp, 6, 5);
- dev->stats.tx_packets += window_read8(vp, 6, 6);
- dev->stats.tx_packets += (window_read8(vp, 6, 9) &
- 0x30) << 4;
- /* Rx packets */ window_read8(vp, 6, 7); /* Must read to clear */
- /* Don't bother with register 9, an extension of registers 6&7.
- If we do use the 6&7 values the atomic update assumption above
- is invalid. */
- dev->stats.rx_bytes += window_read16(vp, 6, 10);
- dev->stats.tx_bytes += window_read16(vp, 6, 12);
- /* Extra stats for get_ethtool_stats() */
- vp->xstats.tx_multiple_collisions += window_read8(vp, 6, 2);
- vp->xstats.tx_single_collisions += window_read8(vp, 6, 3);
- vp->xstats.tx_deferred += window_read8(vp, 6, 8);
- vp->xstats.rx_bad_ssd += window_read8(vp, 4, 12);
-
- dev->stats.collisions = vp->xstats.tx_multiple_collisions
- + vp->xstats.tx_single_collisions
- + vp->xstats.tx_max_collisions;
-
- {
- u8 up = window_read8(vp, 4, 13);
- dev->stats.rx_bytes += (up & 0x0f) << 16;
- dev->stats.tx_bytes += (up & 0xf0) << 12;
- }
-}
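The final read above (window 4, register 13) packs the upper bits of both byte counters into a single byte: the low nibble extends rx_bytes and the high nibble extends tx_bytes. A worked example of the unpacking, with an arbitrary illustrative value:

	/* If window 4, register 13 reads back 0x35:                        */
	/*   rx_bytes += (0x35 & 0x0f) << 16;   adds 0x05 << 16 = 0x50000   */
	/*   tx_bytes += (0x35 & 0xf0) << 12;   adds 0x30 << 12 = 0x30000   */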
-
-static int vortex_nway_reset(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
-
- return mii_nway_restart(&vp->mii);
-}
-
-static int vortex_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct vortex_private *vp = netdev_priv(dev);
-
- return mii_ethtool_gset(&vp->mii, cmd);
-}
-
-static int vortex_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct vortex_private *vp = netdev_priv(dev);
-
- return mii_ethtool_sset(&vp->mii, cmd);
-}
-
-static u32 vortex_get_msglevel(struct net_device *dev)
-{
- return vortex_debug;
-}
-
-static void vortex_set_msglevel(struct net_device *dev, u32 dbg)
-{
- vortex_debug = dbg;
-}
-
-static int vortex_get_sset_count(struct net_device *dev, int sset)
-{
- switch (sset) {
- case ETH_SS_STATS:
- return VORTEX_NUM_STATS;
- default:
- return -EOPNOTSUPP;
- }
-}
-
-static void vortex_get_ethtool_stats(struct net_device *dev,
- struct ethtool_stats *stats, u64 *data)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- unsigned long flags;
-
- spin_lock_irqsave(&vp->lock, flags);
- update_stats(ioaddr, dev);
- spin_unlock_irqrestore(&vp->lock, flags);
-
- data[0] = vp->xstats.tx_deferred;
- data[1] = vp->xstats.tx_max_collisions;
- data[2] = vp->xstats.tx_multiple_collisions;
- data[3] = vp->xstats.tx_single_collisions;
- data[4] = vp->xstats.rx_bad_ssd;
-}
-
-
-static void vortex_get_strings(struct net_device *dev, u32 stringset, u8 *data)
-{
- switch (stringset) {
- case ETH_SS_STATS:
- memcpy(data, &ethtool_stats_keys, sizeof(ethtool_stats_keys));
- break;
- default:
- WARN_ON(1);
- break;
- }
-}
-
-static void vortex_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- struct vortex_private *vp = netdev_priv(dev);
-
- strcpy(info->driver, DRV_NAME);
- if (VORTEX_PCI(vp)) {
- strcpy(info->bus_info, pci_name(VORTEX_PCI(vp)));
- } else {
- if (VORTEX_EISA(vp))
- strcpy(info->bus_info, dev_name(vp->gendev));
- else
- sprintf(info->bus_info, "EISA 0x%lx %d",
- dev->base_addr, dev->irq);
- }
-}
-
-static void vortex_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
-{
- struct vortex_private *vp = netdev_priv(dev);
-
- if (!VORTEX_PCI(vp))
- return;
-
- wol->supported = WAKE_MAGIC;
-
- wol->wolopts = 0;
- if (vp->enable_wol)
- wol->wolopts |= WAKE_MAGIC;
-}
-
-static int vortex_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
-{
- struct vortex_private *vp = netdev_priv(dev);
-
- if (!VORTEX_PCI(vp))
- return -EOPNOTSUPP;
-
- if (wol->wolopts & ~WAKE_MAGIC)
- return -EINVAL;
-
- if (wol->wolopts & WAKE_MAGIC)
- vp->enable_wol = 1;
- else
- vp->enable_wol = 0;
- acpi_set_WOL(dev);
-
- return 0;
-}
-
-static const struct ethtool_ops vortex_ethtool_ops = {
- .get_drvinfo = vortex_get_drvinfo,
- .get_strings = vortex_get_strings,
- .get_msglevel = vortex_get_msglevel,
- .set_msglevel = vortex_set_msglevel,
- .get_ethtool_stats = vortex_get_ethtool_stats,
- .get_sset_count = vortex_get_sset_count,
- .get_settings = vortex_get_settings,
- .set_settings = vortex_set_settings,
- .get_link = ethtool_op_get_link,
- .nway_reset = vortex_nway_reset,
- .get_wol = vortex_get_wol,
- .set_wol = vortex_set_wol,
-};
-
-#ifdef CONFIG_PCI
-/*
- * Must power the device up to do MDIO operations
- */
-static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
-{
- int err;
- struct vortex_private *vp = netdev_priv(dev);
- pci_power_t state = 0;
-
- if(VORTEX_PCI(vp))
- state = VORTEX_PCI(vp)->current_state;
-
- /* The kernel core really should have pci_get_power_state() */
-
- if(state != 0)
- pci_set_power_state(VORTEX_PCI(vp), PCI_D0);
- err = generic_mii_ioctl(&vp->mii, if_mii(rq), cmd, NULL);
- if(state != 0)
- pci_set_power_state(VORTEX_PCI(vp), state);
-
- return err;
-}
-#endif
-
-
-/* Pre-Cyclone chips have no documented multicast filter, so the only
- multicast setting is to receive all multicast frames. At least
- the chip has a very clean way to set the mode, unlike many others. */
-static void set_rx_mode(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
- int new_mode;
-
- if (dev->flags & IFF_PROMISC) {
- if (vortex_debug > 3)
- pr_notice("%s: Setting promiscuous mode.\n", dev->name);
- new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast|RxProm;
- } else if (!netdev_mc_empty(dev) || dev->flags & IFF_ALLMULTI) {
- new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast;
- } else
- new_mode = SetRxFilter | RxStation | RxBroadcast;
-
- iowrite16(new_mode, ioaddr + EL3_CMD);
-}
-
-#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
-/* Setup the card so that it can receive frames with an 802.1q VLAN tag.
- Note that this must be done after each RxReset due to some backwards
- compatibility logic in the Cyclone and Tornado ASICs */
-
-/* The Ethernet Type used for 802.1q tagged frames */
-#define VLAN_ETHER_TYPE 0x8100
-
-static void set_8021q_mode(struct net_device *dev, int enable)
-{
- struct vortex_private *vp = netdev_priv(dev);
- int mac_ctrl;
-
- if ((vp->drv_flags&IS_CYCLONE) || (vp->drv_flags&IS_TORNADO)) {
- /* cyclone and tornado chipsets can recognize 802.1q
- * tagged frames and treat them correctly */
-
- int max_pkt_size = dev->mtu+14; /* MTU+Ethernet header */
- if (enable)
- max_pkt_size += 4; /* 802.1Q VLAN tag */
-
- window_write16(vp, max_pkt_size, 3, Wn3_MaxPktSize);
-
- /* set VlanEtherType to let the hardware checksumming
- treat tagged frames correctly */
- window_write16(vp, VLAN_ETHER_TYPE, 7, Wn7_VlanEtherType);
- } else {
- /* on older cards we have to enable large frames */
-
- vp->large_frames = dev->mtu > 1500 || enable;
-
- mac_ctrl = window_read16(vp, 3, Wn3_MAC_Ctrl);
- if (vp->large_frames)
- mac_ctrl |= 0x40;
- else
- mac_ctrl &= ~0x40;
- window_write16(vp, mac_ctrl, 3, Wn3_MAC_Ctrl);
- }
-}
-#else
-
-static void set_8021q_mode(struct net_device *dev, int enable)
-{
-}
-
-
-#endif
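On Cyclone/Tornado parts the routine above simply raises the MaxPktSize register; with the default MTU of 1500 that works out to 1514 bytes untagged and 1518 bytes once the 802.1Q tag is allowed for. A minimal sketch of the same arithmetic (the helper name is illustrative only):

static int vlan_max_pkt_size(int mtu, int tagged)
{
	int size = mtu + 14;	/* payload + Ethernet header */

	if (tagged)
		size += 4;	/* room for the 802.1Q VLAN tag */
	return size;
}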
-
-/* MII transceiver control section.
- Read and write the MII registers using software-generated serial
- MDIO protocol. See the MII specifications or DP83840A data sheet
- for details. */
-
-/* The maximum data clock rate is 2.5 Mhz. The minimum timing is usually
- met by back-to-back PCI I/O cycles, but we insert a delay to avoid
- "overclocking" issues. */
-static void mdio_delay(struct vortex_private *vp)
-{
- window_read32(vp, 4, Wn4_PhysicalMgmt);
-}
-
-#define MDIO_SHIFT_CLK 0x01
-#define MDIO_DIR_WRITE 0x04
-#define MDIO_DATA_WRITE0 (0x00 | MDIO_DIR_WRITE)
-#define MDIO_DATA_WRITE1 (0x02 | MDIO_DIR_WRITE)
-#define MDIO_DATA_READ 0x02
-#define MDIO_ENB_IN 0x00
-
-/* Generate the preamble required for initial synchronization and
- a few older transceivers. */
-static void mdio_sync(struct vortex_private *vp, int bits)
-{
- /* Establish sync by sending at least 32 logic ones. */
- while (-- bits >= 0) {
- window_write16(vp, MDIO_DATA_WRITE1, 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- window_write16(vp, MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK,
- 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- }
-}
-
-static int mdio_read(struct net_device *dev, int phy_id, int location)
-{
- int i;
- struct vortex_private *vp = netdev_priv(dev);
- int read_cmd = (0xf6 << 10) | (phy_id << 5) | location;
- unsigned int retval = 0;
-
- spin_lock_bh(&vp->mii_lock);
-
- if (mii_preamble_required)
- mdio_sync(vp, 32);
-
- /* Shift the read command bits out. */
- for (i = 14; i >= 0; i--) {
- int dataval = (read_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
- window_write16(vp, dataval, 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- window_write16(vp, dataval | MDIO_SHIFT_CLK,
- 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- }
- /* Read the two transition, 16 data, and wire-idle bits. */
- for (i = 19; i > 0; i--) {
- window_write16(vp, MDIO_ENB_IN, 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- retval = (retval << 1) |
- ((window_read16(vp, 4, Wn4_PhysicalMgmt) &
- MDIO_DATA_READ) ? 1 : 0);
- window_write16(vp, MDIO_ENB_IN | MDIO_SHIFT_CLK,
- 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- }
-
- spin_unlock_bh(&vp->mii_lock);
-
- return retval & 0x20000 ? 0xffff : retval>>1 & 0xffff;
-}
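The 15 bits shifted out above are the tail of a standard IEEE 802.3 Clause 22 read frame; the 0xf6 constant folds one extra preamble bit together with the start and read opcodes. A sketch of the same command word built from named fields (assuming Clause 22 framing; field names are illustrative, and only bits 14..0 are ever transmitted):

static int mii_read_cmd(int phy_id, int location)
{
	return (1   << 14) |	/* one more preamble/idle bit       */
	       (0x1 << 12) |	/* ST: start-of-frame (01)          */
	       (0x2 << 10) |	/* OP: read (10)                    */
	       (phy_id << 5) |	/* PHYAD: PHY address, 5 bits       */
	       location;	/* REGAD: register address, 5 bits  */
}

This evaluates to 0x5800 | (phy_id << 5) | location, which matches the low 15 bits of the read_cmd value used by mdio_read() above.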
-
-static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
-{
- struct vortex_private *vp = netdev_priv(dev);
- int write_cmd = 0x50020000 | (phy_id << 23) | (location << 18) | value;
- int i;
-
- spin_lock_bh(&vp->mii_lock);
-
- if (mii_preamble_required)
- mdio_sync(vp, 32);
-
- /* Shift the command bits out. */
- for (i = 31; i >= 0; i--) {
- int dataval = (write_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
- window_write16(vp, dataval, 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- window_write16(vp, dataval | MDIO_SHIFT_CLK,
- 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- }
- /* Leave the interface idle. */
- for (i = 1; i >= 0; i--) {
- window_write16(vp, MDIO_ENB_IN, 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- window_write16(vp, MDIO_ENB_IN | MDIO_SHIFT_CLK,
- 4, Wn4_PhysicalMgmt);
- mdio_delay(vp);
- }
-
- spin_unlock_bh(&vp->mii_lock);
-}
-
-/* ACPI: Advanced Configuration and Power Interface. */
-/* Set Wake-On-LAN mode and put the board into D3 (power-down) state. */
-static void acpi_set_WOL(struct net_device *dev)
-{
- struct vortex_private *vp = netdev_priv(dev);
- void __iomem *ioaddr = vp->ioaddr;
-
- device_set_wakeup_enable(vp->gendev, vp->enable_wol);
-
- if (vp->enable_wol) {
- /* Power up on: 1==Downloaded Filter, 2==Magic Packets, 4==Link Status. */
- window_write16(vp, 2, 7, 0x0c);
- /* The RxFilter must accept the WOL frames. */
- iowrite16(SetRxFilter|RxStation|RxMulticast|RxBroadcast, ioaddr + EL3_CMD);
- iowrite16(RxEnable, ioaddr + EL3_CMD);
-
- if (pci_enable_wake(VORTEX_PCI(vp), PCI_D3hot, 1)) {
- pr_info("%s: WOL not supported.\n", pci_name(VORTEX_PCI(vp)));
-
- vp->enable_wol = 0;
- return;
- }
-
- if (VORTEX_PCI(vp)->current_state < PCI_D3hot)
- return;
-
- /* Change the power state to D3; RxEnable doesn't take effect. */
- pci_set_power_state(VORTEX_PCI(vp), PCI_D3hot);
- }
-}
-
-
-static void __devexit vortex_remove_one(struct pci_dev *pdev)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
- struct vortex_private *vp;
-
- if (!dev) {
- pr_err("vortex_remove_one called for Compaq device!\n");
- BUG();
- }
-
- vp = netdev_priv(dev);
-
- if (vp->cb_fn_base)
- pci_iounmap(VORTEX_PCI(vp), vp->cb_fn_base);
-
- unregister_netdev(dev);
-
- if (VORTEX_PCI(vp)) {
- pci_set_power_state(VORTEX_PCI(vp), PCI_D0); /* Go active */
- if (vp->pm_state_valid)
- pci_restore_state(VORTEX_PCI(vp));
- pci_disable_device(VORTEX_PCI(vp));
- }
- /* Should really use issue_and_wait() here */
- iowrite16(TotalReset | ((vp->drv_flags & EEPROM_RESET) ? 0x04 : 0x14),
- vp->ioaddr + EL3_CMD);
-
- pci_iounmap(VORTEX_PCI(vp), vp->ioaddr);
-
- pci_free_consistent(pdev,
- sizeof(struct boom_rx_desc) * RX_RING_SIZE
- + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
- vp->rx_ring,
- vp->rx_ring_dma);
- if (vp->must_free_region)
- release_region(dev->base_addr, vp->io_size);
- free_netdev(dev);
-}
-
-
-static struct pci_driver vortex_driver = {
- .name = "3c59x",
- .probe = vortex_init_one,
- .remove = __devexit_p(vortex_remove_one),
- .id_table = vortex_pci_tbl,
- .driver.pm = VORTEX_PM_OPS,
-};
-
-
-static int vortex_have_pci;
-static int vortex_have_eisa;
-
-
-static int __init vortex_init(void)
-{
- int pci_rc, eisa_rc;
-
- pci_rc = pci_register_driver(&vortex_driver);
- eisa_rc = vortex_eisa_init();
-
- if (pci_rc == 0)
- vortex_have_pci = 1;
- if (eisa_rc > 0)
- vortex_have_eisa = 1;
-
- return (vortex_have_pci + vortex_have_eisa) ? 0 : -ENODEV;
-}
-
-
-static void __exit vortex_eisa_cleanup(void)
-{
- struct vortex_private *vp;
- void __iomem *ioaddr;
-
-#ifdef CONFIG_EISA
- /* Take care of the EISA devices */
- eisa_driver_unregister(&vortex_eisa_driver);
-#endif
-
- if (compaq_net_device) {
- vp = netdev_priv(compaq_net_device);
- ioaddr = ioport_map(compaq_net_device->base_addr,
- VORTEX_TOTAL_SIZE);
-
- unregister_netdev(compaq_net_device);
- iowrite16(TotalReset, ioaddr + EL3_CMD);
- release_region(compaq_net_device->base_addr,
- VORTEX_TOTAL_SIZE);
-
- free_netdev(compaq_net_device);
- }
-}
-
-
-static void __exit vortex_cleanup(void)
-{
- if (vortex_have_pci)
- pci_unregister_driver(&vortex_driver);
- if (vortex_have_eisa)
- vortex_eisa_cleanup();
-}
-
-
-module_init(vortex_init);
-module_exit(vortex_cleanup);
help
Support for virtual network devices under Sun Logical Domains.
-config NET_VENDOR_3COM
- bool "3COM cards"
- depends on ISA || EISA || MCA || PCI
- help
- If you have a network (Ethernet) card belonging to this class, say Y
- and read the Ethernet-HOWTO, available from
- <http://www.tldp.org/docs.html#howto>.
-
- Note that the answer to this question doesn't directly affect the
- kernel: saying N will just cause the configurator to skip all
- the questions about 3COM cards. If you say Y, you will be asked for
- your specific card in the following questions.
-
-config EL1
- tristate "3c501 \"EtherLink\" support"
- depends on NET_VENDOR_3COM && ISA
- ---help---
- If you have a network (Ethernet) card of this type, say Y and read
- the Ethernet-HOWTO, available from
- <http://www.tldp.org/docs.html#howto>. Also, consider buying a
- new card, since the 3c501 is slow, broken, and obsolete: you will
- have problems. Some people suggest to ping ("man ping") a nearby
- machine every minute ("man cron") when using this card.
-
- To compile this driver as a module, choose M here. The module
- will be called 3c501.
-
config EL2
tristate "3c503 \"EtherLink II\" support"
- depends on NET_VENDOR_3COM && ISA
+ depends on ISA
select CRC32
- help
+ ---help---
If you have a network (Ethernet) card of this type, say Y and read
the Ethernet-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
config ELPLUS
tristate "3c505 \"EtherLink Plus\" support"
- depends on NET_VENDOR_3COM && ISA && ISA_DMA_API
+ depends on ISA && ISA_DMA_API
---help---
Information about this network (Ethernet) card can be found in
<file:Documentation/networking/3c505.txt>. If you have a card of
config EL16
tristate "3c507 \"EtherLink 16\" support (EXPERIMENTAL)"
- depends on NET_VENDOR_3COM && ISA && EXPERIMENTAL
- help
+ depends on ISA && EXPERIMENTAL
+ ---help---
If you have a network (Ethernet) card of this type, say Y and read
the Ethernet-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
To compile this driver as a module, choose M here. The module
will be called 3c507.
-config EL3
- tristate "3c509/3c529 (MCA)/3c579 \"EtherLink III\" support"
- depends on NET_VENDOR_3COM && (ISA || EISA || MCA)
- ---help---
- If you have a network (Ethernet) card belonging to the 3Com
- EtherLinkIII series, say Y and read the Ethernet-HOWTO, available
- from <http://www.tldp.org/docs.html#howto>.
-
- If your card is not working you may need to use the DOS
- setup disk to disable Plug & Play mode, and to select the default
- media type.
-
- To compile this driver as a module, choose M here. The module
- will be called 3c509.
-
-config 3C515
- tristate "3c515 ISA \"Fast EtherLink\""
- depends on NET_VENDOR_3COM && (ISA || EISA) && ISA_DMA_API
- help
- If you have a 3Com ISA EtherLink XL "Corkscrew" 3c515 Fast Ethernet
- network card, say Y and read the Ethernet-HOWTO, available from
- <http://www.tldp.org/docs.html#howto>.
-
- To compile this driver as a module, choose M here. The module
- will be called 3c515.
-
config ELMC
tristate "3c523 \"EtherLink/MC\" support"
- depends on NET_VENDOR_3COM && MCA_LEGACY
- help
+ depends on MCA_LEGACY
+ ---help---
If you have a network (Ethernet) card of this type, say Y and read
the Ethernet-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
config ELMC_II
tristate "3c527 \"EtherLink/MC 32\" support (EXPERIMENTAL)"
- depends on NET_VENDOR_3COM && MCA && MCA_LEGACY
- help
- If you have a network (Ethernet) card of this type, say Y and read
- the Ethernet-HOWTO, available from
- <http://www.tldp.org/docs.html#howto>.
-
- To compile this driver as a module, choose M here. The module
- will be called 3c527.
-
-config VORTEX
- tristate "3c590/3c900 series (592/595/597) \"Vortex/Boomerang\" support"
- depends on NET_VENDOR_3COM && (PCI || EISA)
- select MII
- ---help---
- This option enables driver support for a large number of 10Mbps and
- 10/100Mbps EISA, PCI and PCMCIA 3Com network cards:
-
- "Vortex" (Fast EtherLink 3c590/3c592/3c595/3c597) EISA and PCI
- "Boomerang" (EtherLink XL 3c900 or 3c905) PCI
- "Cyclone" (3c540/3c900/3c905/3c980/3c575/3c656) PCI and Cardbus
- "Tornado" (3c905) PCI
- "Hurricane" (3c555/3cSOHO) PCI
-
- If you have such a card, say Y and read the Ethernet-HOWTO,
- available from <http://www.tldp.org/docs.html#howto>. More
- specific information is in
- <file:Documentation/networking/vortex.txt> and in the comments at
- the beginning of <file:drivers/net/3c59x.c>.
-
- To compile this support as a module, choose M here.
-
-config TYPHOON
- tristate "3cr990 series \"Typhoon\" support"
- depends on NET_VENDOR_3COM && PCI
- select CRC32
+ depends on MCA && MCA_LEGACY
---help---
- This option enables driver support for the 3cr990 series of cards:
-
- 3C990-TX, 3CR990-TX-95, 3CR990-TX-97, 3CR990-FX-95, 3CR990-FX-97,
- 3CR990SVR, 3CR990SVR95, 3CR990SVR97, 3CR990-FX-95 Server,
- 3CR990-FX-97 Server, 3C990B-TX-M, 3C990BSVR
-
If you have a network (Ethernet) card of this type, say Y and read
the Ethernet-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
To compile this driver as a module, choose M here. The module
- will be called typhoon.
+ will be called 3c527.
config LANCE
tristate "AMD LANCE and PCnet (AT1500 and NE2100) support"
if NETDEV_1000
-config ACENIC
- tristate "Alteon AceNIC/3Com 3C985/NetGear GA620 Gigabit support"
- depends on PCI
- ---help---
- Say Y here if you have an Alteon AceNIC, 3Com 3C985(B), NetGear
- GA620, SGI Gigabit or Farallon PN9000-SX PCI Gigabit Ethernet
- adapter. The driver allows for using the Jumbo Frame option (9000
- bytes/frame) however it requires that your switches can handle this
- as well. To enable Jumbo Frames, add `mtu 9000' to your ifconfig
- line.
-
- To compile this driver as a module, choose M here: the
- module will be called acenic.
-
-config ACENIC_OMIT_TIGON_I
- bool "Omit support for old Tigon I based AceNICs"
- depends on ACENIC
- help
- Say Y here if you only have Tigon II based AceNICs and want to leave
- out support for the older Tigon I based cards which are no longer
- being sold (ie. the original Alteon AceNIC and 3Com 3C985 (non B
- version)). This will reduce the size of the driver object by
- app. 100KB. If you are not sure whether your card is a Tigon I or a
- Tigon II, say N here.
-
- The safe and default value for this is N.
-
config DL2K
tristate "DL2000/TC902x-based Gigabit Ethernet support"
depends on PCI
obj-$(CONFIG_MACE) += mace.o
obj-$(CONFIG_BMAC) += bmac.o
-obj-$(CONFIG_VORTEX) += 3c59x.o
-obj-$(CONFIG_TYPHOON) += typhoon.o
obj-$(CONFIG_NE2K_PCI) += ne2k-pci.o 8390.o
obj-$(CONFIG_PCNET32) += pcnet32.o
obj-$(CONFIG_E100) += e100.o
obj-$(CONFIG_SIS900) += sis900.o
obj-$(CONFIG_R6040) += r6040.o
obj-$(CONFIG_YELLOWFIN) += yellowfin.o
-obj-$(CONFIG_ACENIC) += acenic.o
obj-$(CONFIG_ISERIES_VETH) += iseries_veth.o
obj-$(CONFIG_NATSEMI) += natsemi.o
obj-$(CONFIG_NS83820) += ns83820.o
obj-$(CONFIG_SGISEEQ) += sgiseeq.o
obj-$(CONFIG_SGI_O2MACE_ETH) += meth.o
obj-$(CONFIG_AT1700) += at1700.o
-obj-$(CONFIG_EL1) += 3c501.o
obj-$(CONFIG_EL16) += 3c507.o
obj-$(CONFIG_ELMC) += 3c523.o
obj-$(CONFIG_IBMLANA) += ibmlana.o
obj-$(CONFIG_ELMC_II) += 3c527.o
-obj-$(CONFIG_EL3) += 3c509.o
-obj-$(CONFIG_3C515) += 3c515.o
obj-$(CONFIG_EEXPRESS) += eexpress.o
obj-$(CONFIG_EEXPRESS_PRO) += eepro.o
obj-$(CONFIG_8139CP) += 8139cp.o
obj-$(CONFIG_ARM) += arm/
obj-$(CONFIG_DEV_APPLETALK) += appletalk/
+obj-$(CONFIG_ETHERNET) += ethernet/
obj-$(CONFIG_TR) += tokenring/
obj-$(CONFIG_WAN) += wan/
obj-$(CONFIG_ARCNET) += arcnet/
+++ /dev/null
-/*
- * acenic.c: Linux driver for the Alteon AceNIC Gigabit Ethernet card
- * and other Tigon based cards.
- *
- * Copyright 1998-2002 by Jes Sorensen, <jes@trained-monkey.org>.
- *
- * Thanks to Alteon and 3Com for providing hardware and documentation
- * enabling me to write this driver.
- *
- * A mailing list for discussing the use of this driver has been
- * setup, please subscribe to the lists if you have any questions
- * about the driver. Send mail to linux-acenic-help@sunsite.auc.dk to
- * see how to subscribe.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * Additional credits:
- * Pete Wyckoff <wyckoff@ca.sandia.gov>: Initial Linux/Alpha and trace
- * dump support. The trace dump support has not been
- * integrated yet however.
- * Troy Benjegerdes: Big Endian (PPC) patches.
- * Nate Stahl: Better out of memory handling and stats support.
- * Aman Singla: Nasty race between interrupt handler and tx code dealing
- * with 'testing the tx_ret_csm and setting tx_full'
- * David S. Miller <davem@redhat.com>: conversion to new PCI dma mapping
- * infrastructure and Sparc support
- * Pierrick Pinasseau (CERN): For lending me an Ultra 5 to test the
- * driver under Linux/Sparc64
- * Matt Domsch <Matt_Domsch@dell.com>: Detect Alteon 1000baseT cards
- * ETHTOOL_GDRVINFO support
- * Chip Salzenberg <chip@valinux.com>: Fix race condition between tx
- * handler and close() cleanup.
- * Ken Aaker <kdaaker@rchland.vnet.ibm.com>: Correct check for whether
- * memory mapped IO is enabled to
- * make the driver work on RS/6000.
- * Takayoshi Kouchi <kouchi@hpc.bs1.fc.nec.co.jp>: Identifying problem
- * where the driver would disable
- * bus master mode if it had to disable
- * write and invalidate.
- * Stephen Hack <stephen_hack@hp.com>: Fixed ace_set_mac_addr for little
- * endian systems.
- * Val Henson <vhenson@esscom.com>: Reset Jumbo skb producer and
- * rx producer index when
- * flushing the Jumbo ring.
- * Hans Grobler <grobh@sun.ac.za>: Memory leak fixes in the
- * driver init path.
- * Grant Grundler <grundler@cup.hp.com>: PCI write posting fixes.
- */
-
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <linux/ioport.h>
-#include <linux/pci.h>
-#include <linux/dma-mapping.h>
-#include <linux/kernel.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/init.h>
-#include <linux/delay.h>
-#include <linux/mm.h>
-#include <linux/highmem.h>
-#include <linux/sockios.h>
-#include <linux/firmware.h>
-#include <linux/slab.h>
-#include <linux/prefetch.h>
-#include <linux/if_vlan.h>
-
-#ifdef SIOCETHTOOL
-#include <linux/ethtool.h>
-#endif
-
-#include <net/sock.h>
-#include <net/ip.h>
-
-#include <asm/system.h>
-#include <asm/io.h>
-#include <asm/irq.h>
-#include <asm/byteorder.h>
-#include <asm/uaccess.h>
-
-
-#define DRV_NAME "acenic"
-
-#undef INDEX_DEBUG
-
-#ifdef CONFIG_ACENIC_OMIT_TIGON_I
-#define ACE_IS_TIGON_I(ap) 0
-#define ACE_TX_RING_ENTRIES(ap) MAX_TX_RING_ENTRIES
-#else
-#define ACE_IS_TIGON_I(ap) (ap->version == 1)
-#define ACE_TX_RING_ENTRIES(ap) ap->tx_ring_entries
-#endif
-
-#ifndef PCI_VENDOR_ID_ALTEON
-#define PCI_VENDOR_ID_ALTEON 0x12ae
-#endif
-#ifndef PCI_DEVICE_ID_ALTEON_ACENIC_FIBRE
-#define PCI_DEVICE_ID_ALTEON_ACENIC_FIBRE 0x0001
-#define PCI_DEVICE_ID_ALTEON_ACENIC_COPPER 0x0002
-#endif
-#ifndef PCI_DEVICE_ID_3COM_3C985
-#define PCI_DEVICE_ID_3COM_3C985 0x0001
-#endif
-#ifndef PCI_VENDOR_ID_NETGEAR
-#define PCI_VENDOR_ID_NETGEAR 0x1385
-#define PCI_DEVICE_ID_NETGEAR_GA620 0x620a
-#endif
-#ifndef PCI_DEVICE_ID_NETGEAR_GA620T
-#define PCI_DEVICE_ID_NETGEAR_GA620T 0x630a
-#endif
-
-
-/*
- * Farallon used the DEC vendor ID by mistake and they seem not
- * to care - stinky!
- */
-#ifndef PCI_DEVICE_ID_FARALLON_PN9000SX
-#define PCI_DEVICE_ID_FARALLON_PN9000SX 0x1a
-#endif
-#ifndef PCI_DEVICE_ID_FARALLON_PN9100T
-#define PCI_DEVICE_ID_FARALLON_PN9100T 0xfa
-#endif
-#ifndef PCI_VENDOR_ID_SGI
-#define PCI_VENDOR_ID_SGI 0x10a9
-#endif
-#ifndef PCI_DEVICE_ID_SGI_ACENIC
-#define PCI_DEVICE_ID_SGI_ACENIC 0x0009
-#endif
-
-static DEFINE_PCI_DEVICE_TABLE(acenic_pci_tbl) = {
- { PCI_VENDOR_ID_ALTEON, PCI_DEVICE_ID_ALTEON_ACENIC_FIBRE,
- PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
- { PCI_VENDOR_ID_ALTEON, PCI_DEVICE_ID_ALTEON_ACENIC_COPPER,
- PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3C985,
- PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
- { PCI_VENDOR_ID_NETGEAR, PCI_DEVICE_ID_NETGEAR_GA620,
- PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
- { PCI_VENDOR_ID_NETGEAR, PCI_DEVICE_ID_NETGEAR_GA620T,
- PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
- /*
- * Farallon used the DEC vendor ID on their cards incorrectly,
- * then later Alteon's ID.
- */
- { PCI_VENDOR_ID_DEC, PCI_DEVICE_ID_FARALLON_PN9000SX,
- PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
- { PCI_VENDOR_ID_ALTEON, PCI_DEVICE_ID_FARALLON_PN9100T,
- PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
- { PCI_VENDOR_ID_SGI, PCI_DEVICE_ID_SGI_ACENIC,
- PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
- { }
-};
-MODULE_DEVICE_TABLE(pci, acenic_pci_tbl);
-
-#define ace_sync_irq(irq) synchronize_irq(irq)
-
-#ifndef offset_in_page
-#define offset_in_page(ptr) ((unsigned long)(ptr) & ~PAGE_MASK)
-#endif
-
-#define ACE_MAX_MOD_PARMS 8
-#define BOARD_IDX_STATIC 0
-#define BOARD_IDX_OVERFLOW -1
-
-#include "acenic.h"
-
-/*
- * These must be defined before the firmware is included.
- */
-#define MAX_TEXT_LEN 96*1024
-#define MAX_RODATA_LEN 8*1024
-#define MAX_DATA_LEN 2*1024
-
-#ifndef tigon2FwReleaseLocal
-#define tigon2FwReleaseLocal 0
-#endif
-
-/*
- * This driver currently supports Tigon I and Tigon II based cards
- * including the Alteon AceNIC, the 3Com 3C985[B] and NetGear
- * GA620. The driver should also work on the SGI, DEC and Farallon
- * versions of the card, however I have not been able to test that
- * myself.
- *
- * This card is really neat, it supports receive hardware checksumming
- * and jumbo frames (up to 9000 bytes) and does a lot of work in the
- * firmware. Also the programming interface is quite neat, except for
- * the parts dealing with the i2c eeprom on the card ;-)
- *
- * Using jumbo frames:
- *
- * To enable jumbo frames, simply specify an mtu between 1500 and 9000
- * bytes to ifconfig. Jumbo frames can be enabled or disabled at any time
- * by running `ifconfig eth<X> mtu <MTU>' with <X> being the Ethernet
- * interface number and <MTU> being the MTU value.
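- * For example, assuming the first AceNIC port shows up as eth0, running
- * `ifconfig eth0 mtu 9000' would switch that port to jumbo frames and
- * `ifconfig eth0 mtu 1500' would switch it back to standard frames.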
- *
- * Module parameters:
- *
- * When compiled as a loadable module, the driver allows for a number
- * of module parameters to be specified. The driver supports the
- * following module parameters:
- *
- * trace=<val> - Firmware trace level. This requires special traced
- * firmware to replace the firmware supplied with
- * the driver - for debugging purposes only.
- *
- * link=<val> - Link state. Normally you want to use the default link
- * parameters set by the driver. This can be used to
- * override these in case your switch doesn't negotiate
- * the link properly. Valid values are:
- * 0x0001 - Force half duplex link.
- * 0x0002 - Do not negotiate line speed with the other end.
- * 0x0010 - 10Mbit/sec link.
- * 0x0020 - 100Mbit/sec link.
- * 0x0040 - 1000Mbit/sec link.
- * 0x0100 - Do not negotiate flow control.
- * 0x0200 - Enable RX flow control Y
- * 0x0400 - Enable TX flow control Y (Tigon II NICs only).
- * Default value is 0x0270, ie. enable link+flow
- * control negotiation. Negotiating the highest
- * possible link speed with RX flow control enabled.
- *
- * When disabling link speed negotiation, only one link
- * speed is allowed to be specified!
- *
- * tx_coal_tick=<val> - number of coalescing clock ticks (us) allowed
- * to wait for more packets to arrive before
- * interrupting the host, from the time the first
- * packet arrives.
- *
- * rx_coal_tick=<val> - number of coalescing clock ticks (us) allowed
- * to wait for more packets to arrive in the receive ring,
- * before interrupting the host, after receiving the
- * first packet in the ring.
- *
- * max_tx_desc=<val> - maximum number of transmit descriptors
- * (packets) transmitted before interrupting the host.
- *
- * max_rx_desc=<val> - maximum number of receive descriptors
- * (packets) received before interrupting the host.
- *
- * tx_ratio=<val> - 7 bit value (0 - 63) specifying the split in 64th
- * increments of the NIC's on board memory to be used for
- * transmit and receive buffers. For the 1MB NIC app. 800KB
- * is available, on the 1/2MB NIC app. 300KB is available.
- * 68KB will always be available as a minimum for both
- * directions. The default value is a 50/50 split.
- * dis_pci_mem_inval=<val> - disable PCI memory write and invalidate
- * operations, default (1) is to always disable this as
- * that is what Alteon does on NT. I have not been able
- * to measure any real performance differences with
- * this on my systems. Set <val>=0 if you want to
- * enable these operations.
- *
- * If you use more than one NIC, specify the parameters for the
- * individual NICs with a comma, e.g. trace=0,0x00001fff,0 if you want
- * to run tracing on NIC #2 but not on NIC #1 and #3.
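- *
- * As a purely illustrative example, in a hypothetical two NIC system
- * where NIC #1 should be forced to a 100Mbit/sec link without speed
- * negotiation and NIC #2 should keep the defaults, a load command along
- * the lines of
- *
- * modprobe acenic link=0x0022,0x0270
- *
- * would do it (0x0022 = 0x0002 no negotiation + 0x0020 100Mbit/sec,
- * 0x0270 = the driver default).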
- *
- * TODO:
- *
- * - Proper multicast support.
- * - NIC dump support.
- * - More tuning parameters.
- *
- * The mini ring is not used under Linux and I am not sure it makes sense
- * to actually use it.
- *
- * New interrupt handler strategy:
- *
- * The old interrupt handler worked using the traditional method of
- * replacing an skbuff with a new one when a packet arrives. However
- * the rx rings do not need to contain a static number of buffer
- * descriptors, thus it makes sense to move the memory allocation out
- * of the main interrupt handler and do it in a bottom half handler
- * and only allocate new buffers when the number of buffers in the
- * ring is below a certain threshold. In order to avoid starving the
- * NIC under heavy load it is however necessary to force allocation
- * when hitting a minimum threshold. The strategy for allocation is as
- * follows:
- *
- * RX_LOW_BUF_THRES - allocate buffers in the bottom half
- * RX_PANIC_LOW_THRES - we are very low on buffers, allocate
- * the buffers in the interrupt handler
- * RX_RING_THRES - maximum number of buffers in the rx ring
- * RX_MINI_THRES - maximum number of buffers in the mini ring
- * RX_JUMBO_THRES - maximum number of buffers in the jumbo ring
- *
- * One advantageous side effect of this allocation approach is that the
- * entire rx processing can be done without holding any spin lock
- * since the rx rings and registers are totally independent of the tx
- * ring and its registers. This of course includes the kmalloc's of
- * new skb's. Thus start_xmit can run in parallel with rx processing
- * and the memory allocation on SMP systems.
- *
- * Note that running the skb reallocation in a bottom half opens up
- * another can of races which needs to be handled properly. In
- * particular it can happen that the interrupt handler tries to run
- * the reallocation while the bottom half is either running on another
- * CPU or was interrupted on the same CPU. To get around this the
- * driver uses bitops to prevent the reallocation routines from being
- * reentered.
- *
- * TX handling can also be done without holding any spin lock, wheee
- * this is fun! since tx_ret_csm is only written to by the interrupt
- * handler. The case to be aware of is when shutting down the device
- * and cleaning up where it is necessary to make sure that
- * start_xmit() is not running while this is happening. Well DaveM
- * informs me that this case is already protected against ... bye bye
- * Mr. Spin Lock, it was nice to know you.
- *
- * TX interrupts are now partly disabled so the NIC will only generate
- * TX interrupts for the number of coal ticks, not for the number of
- * TX packets in the queue. This should reduce the number of TX only,
- * ie. when no RX processing is done, interrupts seen.
- */
-
-/*
- * Threshold values for RX buffer allocation - the low water marks for
- * when to start refilling the rings are set to 75% of the ring
- * sizes. It seems to make sense to refill the rings entirely from the
- * interrupt handler once it gets below the panic threshold, that way
- * we don't risk that the refilling is moved to another CPU when the
- * one running the interrupt handler just got the slab code hot in its
- * cache.
- */
-#define RX_RING_SIZE 72
-#define RX_MINI_SIZE 64
-#define RX_JUMBO_SIZE 48
-
-#define RX_PANIC_STD_THRES 16
-#define RX_PANIC_STD_REFILL (3*RX_PANIC_STD_THRES)/2
-#define RX_LOW_STD_THRES (3*RX_RING_SIZE)/4
-#define RX_PANIC_MINI_THRES 12
-#define RX_PANIC_MINI_REFILL (3*RX_PANIC_MINI_THRES)/2
-#define RX_LOW_MINI_THRES (3*RX_MINI_SIZE)/4
-#define RX_PANIC_JUMBO_THRES 6
-#define RX_PANIC_JUMBO_REFILL (3*RX_PANIC_JUMBO_THRES)/2
-#define RX_LOW_JUMBO_THRES (3*RX_JUMBO_SIZE)/4
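-
-/*
- * With the ring sizes above this works out to, for example,
- * RX_LOW_STD_THRES = (3 * 72) / 4 = 54 and RX_PANIC_STD_REFILL =
- * (3 * 16) / 2 = 24, i.e. the standard ring is topped up from the
- * tasklet once fewer than 54 buffers remain and refilled directly
- * from the interrupt handler once it drops below the panic
- * threshold of 16.
- */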
-
-
-/*
- * Size of the mini ring entries, basically these just should be big
- * enough to take TCP ACKs
- */
-#define ACE_MINI_SIZE 100
-
-#define ACE_MINI_BUFSIZE ACE_MINI_SIZE
-#define ACE_STD_BUFSIZE (ACE_STD_MTU + ETH_HLEN + 4)
-#define ACE_JUMBO_BUFSIZE (ACE_JUMBO_MTU + ETH_HLEN + 4)
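-
-/*
- * Assuming the usual ACE_STD_MTU of 1500 and ACE_JUMBO_MTU of 9000
- * from acenic.h, this gives 1500 + 14 + 4 = 1518 byte buffers for the
- * standard ring and 9000 + 14 + 4 = 9018 byte buffers for the jumbo
- * ring (MTU plus the Ethernet header plus four spare bytes).
- */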
-
-/*
- * There seems to be a magic difference in the effect between 995 and 996
- * but little difference between 900 and 995 ... no idea why.
- *
- * There is now a default set of tuning parameters which is set, depending
- * on whether or not the user enables Jumbo frames. It's assumed that if
- * Jumbo frames are enabled, the user wants optimal tuning for that case.
- */
-#define DEF_TX_COAL 400 /* 996 */
-#define DEF_TX_MAX_DESC 60 /* was 40 */
-#define DEF_RX_COAL 120 /* 1000 */
-#define DEF_RX_MAX_DESC 25
-#define DEF_TX_RATIO 21 /* 24 */
-
-#define DEF_JUMBO_TX_COAL 20
-#define DEF_JUMBO_TX_MAX_DESC 60
-#define DEF_JUMBO_RX_COAL 30
-#define DEF_JUMBO_RX_MAX_DESC 6
-#define DEF_JUMBO_TX_RATIO 21
-
-#if tigon2FwReleaseLocal < 20001118
-/*
- * Standard firmware and early modifications duplicate
- * IRQ load without this flag (coal timer is never reset).
- * Note that with this flag tx_coal should be less than
- * time to xmit full tx ring.
- * 400usec is not so bad for tx ring size of 128.
- */
-#define TX_COAL_INTS_ONLY 1 /* worth it */
-#else
-/*
- * With modified firmware, this is not necessary, but still useful.
- */
-#define TX_COAL_INTS_ONLY 1
-#endif
-
-#define DEF_TRACE 0
-#define DEF_STAT (2 * TICKS_PER_SEC)
-
-
-static int link_state[ACE_MAX_MOD_PARMS];
-static int trace[ACE_MAX_MOD_PARMS];
-static int tx_coal_tick[ACE_MAX_MOD_PARMS];
-static int rx_coal_tick[ACE_MAX_MOD_PARMS];
-static int max_tx_desc[ACE_MAX_MOD_PARMS];
-static int max_rx_desc[ACE_MAX_MOD_PARMS];
-static int tx_ratio[ACE_MAX_MOD_PARMS];
-static int dis_pci_mem_inval[ACE_MAX_MOD_PARMS] = {1, 1, 1, 1, 1, 1, 1, 1};
-
-MODULE_AUTHOR("Jes Sorensen <jes@trained-monkey.org>");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("AceNIC/3C985/GA620 Gigabit Ethernet driver");
-#ifndef CONFIG_ACENIC_OMIT_TIGON_I
-MODULE_FIRMWARE("acenic/tg1.bin");
-#endif
-MODULE_FIRMWARE("acenic/tg2.bin");
-
-module_param_array_named(link, link_state, int, NULL, 0);
-module_param_array(trace, int, NULL, 0);
-module_param_array(tx_coal_tick, int, NULL, 0);
-module_param_array(max_tx_desc, int, NULL, 0);
-module_param_array(rx_coal_tick, int, NULL, 0);
-module_param_array(max_rx_desc, int, NULL, 0);
-module_param_array(tx_ratio, int, NULL, 0);
-MODULE_PARM_DESC(link, "AceNIC/3C985/NetGear link state");
-MODULE_PARM_DESC(trace, "AceNIC/3C985/NetGear firmware trace level");
-MODULE_PARM_DESC(tx_coal_tick, "AceNIC/3C985/GA620 max clock ticks to wait after the first tx descriptor arrives");
-MODULE_PARM_DESC(max_tx_desc, "AceNIC/3C985/GA620 max number of transmit descriptors to wait for");
-MODULE_PARM_DESC(rx_coal_tick, "AceNIC/3C985/GA620 max clock ticks to wait after the first rx descriptor arrives");
-MODULE_PARM_DESC(max_rx_desc, "AceNIC/3C985/GA620 max number of receive descriptors to wait for");
-MODULE_PARM_DESC(tx_ratio, "AceNIC/3C985/GA620 ratio of NIC memory used for TX/RX descriptors (range 0-63)");
-
-
-static const char version[] __devinitconst =
- "acenic.c: v0.92 08/05/2002 Jes Sorensen, linux-acenic@SunSITE.dk\n"
- " http://home.cern.ch/~jes/gige/acenic.html\n";
-
-static int ace_get_settings(struct net_device *, struct ethtool_cmd *);
-static int ace_set_settings(struct net_device *, struct ethtool_cmd *);
-static void ace_get_drvinfo(struct net_device *, struct ethtool_drvinfo *);
-
-static const struct ethtool_ops ace_ethtool_ops = {
- .get_settings = ace_get_settings,
- .set_settings = ace_set_settings,
- .get_drvinfo = ace_get_drvinfo,
-};
-
-static void ace_watchdog(struct net_device *dev);
-
-static const struct net_device_ops ace_netdev_ops = {
- .ndo_open = ace_open,
- .ndo_stop = ace_close,
- .ndo_tx_timeout = ace_watchdog,
- .ndo_get_stats = ace_get_stats,
- .ndo_start_xmit = ace_start_xmit,
- .ndo_set_multicast_list = ace_set_multicast_list,
- .ndo_validate_addr = eth_validate_addr,
- .ndo_set_mac_address = ace_set_mac_addr,
- .ndo_change_mtu = ace_change_mtu,
-};
-
-static int __devinit acenic_probe_one(struct pci_dev *pdev,
- const struct pci_device_id *id)
-{
- struct net_device *dev;
- struct ace_private *ap;
- static int boards_found;
-
- dev = alloc_etherdev(sizeof(struct ace_private));
- if (dev == NULL) {
- printk(KERN_ERR "acenic: Unable to allocate "
- "net_device structure!\n");
- return -ENOMEM;
- }
-
- SET_NETDEV_DEV(dev, &pdev->dev);
-
- ap = netdev_priv(dev);
- ap->pdev = pdev;
- ap->name = pci_name(pdev);
-
- dev->features |= NETIF_F_SG | NETIF_F_IP_CSUM;
- dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
-
- dev->watchdog_timeo = 5*HZ;
-
- dev->netdev_ops = &ace_netdev_ops;
- SET_ETHTOOL_OPS(dev, &ace_ethtool_ops);
-
- /* we only display this string ONCE */
- if (!boards_found)
- printk(version);
-
- if (pci_enable_device(pdev))
- goto fail_free_netdev;
-
- /*
- * Enable master mode before we start playing with the
- * pci_command word since pci_set_master() will modify
- * it.
- */
- pci_set_master(pdev);
-
- pci_read_config_word(pdev, PCI_COMMAND, &ap->pci_command);
-
- /* OpenFirmware on Macs does not set this - DOH.. */
- if (!(ap->pci_command & PCI_COMMAND_MEMORY)) {
- printk(KERN_INFO "%s: Enabling PCI Memory Mapped "
- "access - was not enabled by BIOS/Firmware\n",
- ap->name);
- ap->pci_command = ap->pci_command | PCI_COMMAND_MEMORY;
- pci_write_config_word(ap->pdev, PCI_COMMAND,
- ap->pci_command);
- wmb();
- }
-
- pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &ap->pci_latency);
- if (ap->pci_latency <= 0x40) {
- ap->pci_latency = 0x40;
- pci_write_config_byte(pdev, PCI_LATENCY_TIMER, ap->pci_latency);
- }
-
- /*
- * Remap the regs into kernel space - this is abuse of
- * dev->base_addr since it was meant for I/O port
- * addresses but who gives a damn.
- */
- dev->base_addr = pci_resource_start(pdev, 0);
- ap->regs = ioremap(dev->base_addr, 0x4000);
- if (!ap->regs) {
- printk(KERN_ERR "%s: Unable to map I/O register, "
- "AceNIC %i will be disabled.\n",
- ap->name, boards_found);
- goto fail_free_netdev;
- }
-
- switch(pdev->vendor) {
- case PCI_VENDOR_ID_ALTEON:
- if (pdev->device == PCI_DEVICE_ID_FARALLON_PN9100T) {
- printk(KERN_INFO "%s: Farallon PN9100-T ",
- ap->name);
- } else {
- printk(KERN_INFO "%s: Alteon AceNIC ",
- ap->name);
- }
- break;
- case PCI_VENDOR_ID_3COM:
- printk(KERN_INFO "%s: 3Com 3C985 ", ap->name);
- break;
- case PCI_VENDOR_ID_NETGEAR:
- printk(KERN_INFO "%s: NetGear GA620 ", ap->name);
- break;
- case PCI_VENDOR_ID_DEC:
- if (pdev->device == PCI_DEVICE_ID_FARALLON_PN9000SX) {
- printk(KERN_INFO "%s: Farallon PN9000-SX ",
- ap->name);
- break;
- }
- case PCI_VENDOR_ID_SGI:
- printk(KERN_INFO "%s: SGI AceNIC ", ap->name);
- break;
- default:
- printk(KERN_INFO "%s: Unknown AceNIC ", ap->name);
- break;
- }
-
- printk("Gigabit Ethernet at 0x%08lx, ", dev->base_addr);
- printk("irq %d\n", pdev->irq);
-
-#ifdef CONFIG_ACENIC_OMIT_TIGON_I
- if ((readl(&ap->regs->HostCtrl) >> 28) == 4) {
- printk(KERN_ERR "%s: Driver compiled without Tigon I"
- " support - NIC disabled\n", dev->name);
- goto fail_uninit;
- }
-#endif
-
- if (ace_allocate_descriptors(dev))
- goto fail_free_netdev;
-
-#ifdef MODULE
- if (boards_found >= ACE_MAX_MOD_PARMS)
- ap->board_idx = BOARD_IDX_OVERFLOW;
- else
- ap->board_idx = boards_found;
-#else
- ap->board_idx = BOARD_IDX_STATIC;
-#endif
-
- if (ace_init(dev))
- goto fail_free_netdev;
-
- if (register_netdev(dev)) {
- printk(KERN_ERR "acenic: device registration failed\n");
- goto fail_uninit;
- }
- ap->name = dev->name;
-
- if (ap->pci_using_dac)
- dev->features |= NETIF_F_HIGHDMA;
-
- pci_set_drvdata(pdev, dev);
-
- boards_found++;
- return 0;
-
- fail_uninit:
- ace_init_cleanup(dev);
- fail_free_netdev:
- free_netdev(dev);
- return -ENODEV;
-}
-
-static void __devexit acenic_remove_one(struct pci_dev *pdev)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- short i;
-
- unregister_netdev(dev);
-
- writel(readl(&regs->CpuCtrl) | CPU_HALT, &regs->CpuCtrl);
- if (ap->version >= 2)
- writel(readl(&regs->CpuBCtrl) | CPU_HALT, &regs->CpuBCtrl);
-
- /*
- * This clears any pending interrupts
- */
- writel(1, &regs->Mb0Lo);
- readl(&regs->CpuCtrl); /* flush */
-
- /*
- * Make sure no other CPUs are processing interrupts
- * on the card before the buffers are being released.
- * Otherwise one might experience some `interesting'
- * effects.
- *
- * Then release the RX buffers - jumbo buffers were
- * already released in ace_close().
- */
- ace_sync_irq(dev->irq);
-
- for (i = 0; i < RX_STD_RING_ENTRIES; i++) {
- struct sk_buff *skb = ap->skb->rx_std_skbuff[i].skb;
-
- if (skb) {
- struct ring_info *ringp;
- dma_addr_t mapping;
-
- ringp = &ap->skb->rx_std_skbuff[i];
- mapping = dma_unmap_addr(ringp, mapping);
- pci_unmap_page(ap->pdev, mapping,
- ACE_STD_BUFSIZE,
- PCI_DMA_FROMDEVICE);
-
- ap->rx_std_ring[i].size = 0;
- ap->skb->rx_std_skbuff[i].skb = NULL;
- dev_kfree_skb(skb);
- }
- }
-
- if (ap->version >= 2) {
- for (i = 0; i < RX_MINI_RING_ENTRIES; i++) {
- struct sk_buff *skb = ap->skb->rx_mini_skbuff[i].skb;
-
- if (skb) {
- struct ring_info *ringp;
- dma_addr_t mapping;
-
- ringp = &ap->skb->rx_mini_skbuff[i];
- mapping = dma_unmap_addr(ringp,mapping);
- pci_unmap_page(ap->pdev, mapping,
- ACE_MINI_BUFSIZE,
- PCI_DMA_FROMDEVICE);
-
- ap->rx_mini_ring[i].size = 0;
- ap->skb->rx_mini_skbuff[i].skb = NULL;
- dev_kfree_skb(skb);
- }
- }
- }
-
- for (i = 0; i < RX_JUMBO_RING_ENTRIES; i++) {
- struct sk_buff *skb = ap->skb->rx_jumbo_skbuff[i].skb;
- if (skb) {
- struct ring_info *ringp;
- dma_addr_t mapping;
-
- ringp = &ap->skb->rx_jumbo_skbuff[i];
- mapping = dma_unmap_addr(ringp, mapping);
- pci_unmap_page(ap->pdev, mapping,
- ACE_JUMBO_BUFSIZE,
- PCI_DMA_FROMDEVICE);
-
- ap->rx_jumbo_ring[i].size = 0;
- ap->skb->rx_jumbo_skbuff[i].skb = NULL;
- dev_kfree_skb(skb);
- }
- }
-
- ace_init_cleanup(dev);
- free_netdev(dev);
-}
-
-static struct pci_driver acenic_pci_driver = {
- .name = "acenic",
- .id_table = acenic_pci_tbl,
- .probe = acenic_probe_one,
- .remove = __devexit_p(acenic_remove_one),
-};
-
-static int __init acenic_init(void)
-{
- return pci_register_driver(&acenic_pci_driver);
-}
-
-static void __exit acenic_exit(void)
-{
- pci_unregister_driver(&acenic_pci_driver);
-}
-
-module_init(acenic_init);
-module_exit(acenic_exit);
-
-static void ace_free_descriptors(struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- int size;
-
- if (ap->rx_std_ring != NULL) {
- size = (sizeof(struct rx_desc) *
- (RX_STD_RING_ENTRIES +
- RX_JUMBO_RING_ENTRIES +
- RX_MINI_RING_ENTRIES +
- RX_RETURN_RING_ENTRIES));
- pci_free_consistent(ap->pdev, size, ap->rx_std_ring,
- ap->rx_ring_base_dma);
- ap->rx_std_ring = NULL;
- ap->rx_jumbo_ring = NULL;
- ap->rx_mini_ring = NULL;
- ap->rx_return_ring = NULL;
- }
- if (ap->evt_ring != NULL) {
- size = (sizeof(struct event) * EVT_RING_ENTRIES);
- pci_free_consistent(ap->pdev, size, ap->evt_ring,
- ap->evt_ring_dma);
- ap->evt_ring = NULL;
- }
- if (ap->tx_ring != NULL && !ACE_IS_TIGON_I(ap)) {
- size = (sizeof(struct tx_desc) * MAX_TX_RING_ENTRIES);
- pci_free_consistent(ap->pdev, size, ap->tx_ring,
- ap->tx_ring_dma);
- }
- ap->tx_ring = NULL;
-
- if (ap->evt_prd != NULL) {
- pci_free_consistent(ap->pdev, sizeof(u32),
- (void *)ap->evt_prd, ap->evt_prd_dma);
- ap->evt_prd = NULL;
- }
- if (ap->rx_ret_prd != NULL) {
- pci_free_consistent(ap->pdev, sizeof(u32),
- (void *)ap->rx_ret_prd,
- ap->rx_ret_prd_dma);
- ap->rx_ret_prd = NULL;
- }
- if (ap->tx_csm != NULL) {
- pci_free_consistent(ap->pdev, sizeof(u32),
- (void *)ap->tx_csm, ap->tx_csm_dma);
- ap->tx_csm = NULL;
- }
-}
-
-
-static int ace_allocate_descriptors(struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- int size;
-
- size = (sizeof(struct rx_desc) *
- (RX_STD_RING_ENTRIES +
- RX_JUMBO_RING_ENTRIES +
- RX_MINI_RING_ENTRIES +
- RX_RETURN_RING_ENTRIES));
-
- ap->rx_std_ring = pci_alloc_consistent(ap->pdev, size,
- &ap->rx_ring_base_dma);
- if (ap->rx_std_ring == NULL)
- goto fail;
-
- ap->rx_jumbo_ring = ap->rx_std_ring + RX_STD_RING_ENTRIES;
- ap->rx_mini_ring = ap->rx_jumbo_ring + RX_JUMBO_RING_ENTRIES;
- ap->rx_return_ring = ap->rx_mini_ring + RX_MINI_RING_ENTRIES;
-
- size = (sizeof(struct event) * EVT_RING_ENTRIES);
-
- ap->evt_ring = pci_alloc_consistent(ap->pdev, size, &ap->evt_ring_dma);
-
- if (ap->evt_ring == NULL)
- goto fail;
-
- /*
- * Only allocate a host TX ring for the Tigon II, the Tigon I
- * has to use PCI registers for this ;-(
- */
- if (!ACE_IS_TIGON_I(ap)) {
- size = (sizeof(struct tx_desc) * MAX_TX_RING_ENTRIES);
-
- ap->tx_ring = pci_alloc_consistent(ap->pdev, size,
- &ap->tx_ring_dma);
-
- if (ap->tx_ring == NULL)
- goto fail;
- }
-
- ap->evt_prd = pci_alloc_consistent(ap->pdev, sizeof(u32),
- &ap->evt_prd_dma);
- if (ap->evt_prd == NULL)
- goto fail;
-
- ap->rx_ret_prd = pci_alloc_consistent(ap->pdev, sizeof(u32),
- &ap->rx_ret_prd_dma);
- if (ap->rx_ret_prd == NULL)
- goto fail;
-
- ap->tx_csm = pci_alloc_consistent(ap->pdev, sizeof(u32),
- &ap->tx_csm_dma);
- if (ap->tx_csm == NULL)
- goto fail;
-
- return 0;
-
-fail:
- /* Clean up. */
- ace_init_cleanup(dev);
- return 1;
-}
-
-
-/*
- * Generic cleanup handling data allocated during init. Used when the
- * module is unloaded or if an error occurs during initialization
- */
-static void ace_init_cleanup(struct net_device *dev)
-{
- struct ace_private *ap;
-
- ap = netdev_priv(dev);
-
- ace_free_descriptors(dev);
-
- if (ap->info)
- pci_free_consistent(ap->pdev, sizeof(struct ace_info),
- ap->info, ap->info_dma);
- kfree(ap->skb);
- kfree(ap->trace_buf);
-
- if (dev->irq)
- free_irq(dev->irq, dev);
-
- iounmap(ap->regs);
-}
-
-
-/*
- * Commands are considered to be slow.
- */
-static inline void ace_issue_cmd(struct ace_regs __iomem *regs, struct cmd *cmd)
-{
- u32 idx;
-
- idx = readl(&regs->CmdPrd);
-
- writel(*(u32 *)(cmd), &regs->CmdRng[idx]);
- idx = (idx + 1) % CMD_RING_ENTRIES;
-
- writel(idx, &regs->CmdPrd);
-}
-
-
-static int __devinit ace_init(struct net_device *dev)
-{
- struct ace_private *ap;
- struct ace_regs __iomem *regs;
- struct ace_info *info = NULL;
- struct pci_dev *pdev;
- unsigned long myjif;
- u64 tmp_ptr;
- u32 tig_ver, mac1, mac2, tmp, pci_state;
- int board_idx, ecode = 0;
- short i;
- unsigned char cache_size;
-
- ap = netdev_priv(dev);
- regs = ap->regs;
-
- board_idx = ap->board_idx;
-
- /*
- * aman@sgi.com - it's useful to do a NIC reset here to
- * address the `Firmware not running' problem subsequent
- * to any crashes involving the NIC
- */
- writel(HW_RESET | (HW_RESET << 24), &regs->HostCtrl);
- readl(&regs->HostCtrl); /* PCI write posting */
- udelay(5);
-
- /*
- * Don't access any other registers before this point!
- */
-#ifdef __BIG_ENDIAN
- /*
- * This will most likely need BYTE_SWAP once we switch
- * to using __raw_writel()
- */
- writel((WORD_SWAP | CLR_INT | ((WORD_SWAP | CLR_INT) << 24)),
- &regs->HostCtrl);
-#else
- writel((CLR_INT | WORD_SWAP | ((CLR_INT | WORD_SWAP) << 24)),
- &regs->HostCtrl);
-#endif
- readl(&regs->HostCtrl); /* PCI write posting */
-
- /*
- * Stop the NIC CPU and clear pending interrupts
- */
- writel(readl(&regs->CpuCtrl) | CPU_HALT, &regs->CpuCtrl);
- readl(&regs->CpuCtrl); /* PCI write posting */
- writel(0, &regs->Mb0Lo);
-
- tig_ver = readl(&regs->HostCtrl) >> 28;
-
- switch(tig_ver){
-#ifndef CONFIG_ACENIC_OMIT_TIGON_I
- case 4:
- case 5:
- printk(KERN_INFO " Tigon I (Rev. %i), Firmware: %i.%i.%i, ",
- tig_ver, ap->firmware_major, ap->firmware_minor,
- ap->firmware_fix);
- writel(0, &regs->LocalCtrl);
- ap->version = 1;
- ap->tx_ring_entries = TIGON_I_TX_RING_ENTRIES;
- break;
-#endif
- case 6:
- printk(KERN_INFO " Tigon II (Rev. %i), Firmware: %i.%i.%i, ",
- tig_ver, ap->firmware_major, ap->firmware_minor,
- ap->firmware_fix);
- writel(readl(&regs->CpuBCtrl) | CPU_HALT, &regs->CpuBCtrl);
- readl(&regs->CpuBCtrl); /* PCI write posting */
- /*
- * The SRAM bank size does _not_ indicate the amount
- * of memory on the card, it controls the _bank_ size!
- * Ie. a 1MB AceNIC will have two banks of 512KB.
- */
- writel(SRAM_BANK_512K, &regs->LocalCtrl);
- writel(SYNC_SRAM_TIMING, &regs->MiscCfg);
- ap->version = 2;
- ap->tx_ring_entries = MAX_TX_RING_ENTRIES;
- break;
- default:
- printk(KERN_WARNING " Unsupported Tigon version detected "
- "(%i)\n", tig_ver);
- ecode = -ENODEV;
- goto init_error;
- }
-
- /*
- * ModeStat _must_ be set after the SRAM settings as this change
- * seems to corrupt the ModeStat and possible other registers.
- * The SRAM settings survive resets and setting it to the same
- * value a second time works as well. This is what caused the
- * `Firmware not running' problem on the Tigon II.
- */
-#ifdef __BIG_ENDIAN
- writel(ACE_BYTE_SWAP_DMA | ACE_WARN | ACE_FATAL | ACE_BYTE_SWAP_BD |
- ACE_WORD_SWAP_BD | ACE_NO_JUMBO_FRAG, &regs->ModeStat);
-#else
- writel(ACE_BYTE_SWAP_DMA | ACE_WARN | ACE_FATAL |
- ACE_WORD_SWAP_BD | ACE_NO_JUMBO_FRAG, &regs->ModeStat);
-#endif
- readl(&regs->ModeStat); /* PCI write posting */
-
- mac1 = 0;
- for(i = 0; i < 4; i++) {
- int t;
-
- mac1 = mac1 << 8;
- t = read_eeprom_byte(dev, 0x8c+i);
- if (t < 0) {
- ecode = -EIO;
- goto init_error;
- } else
- mac1 |= (t & 0xff);
- }
- mac2 = 0;
- for(i = 4; i < 8; i++) {
- int t;
-
- mac2 = mac2 << 8;
- t = read_eeprom_byte(dev, 0x8c+i);
- if (t < 0) {
- ecode = -EIO;
- goto init_error;
- } else
- mac2 |= (t & 0xff);
- }
-
- writel(mac1, &regs->MacAddrHi);
- writel(mac2, &regs->MacAddrLo);
-
- dev->dev_addr[0] = (mac1 >> 8) & 0xff;
- dev->dev_addr[1] = mac1 & 0xff;
- dev->dev_addr[2] = (mac2 >> 24) & 0xff;
- dev->dev_addr[3] = (mac2 >> 16) & 0xff;
- dev->dev_addr[4] = (mac2 >> 8) & 0xff;
- dev->dev_addr[5] = mac2 & 0xff;
-
- printk("MAC: %pM\n", dev->dev_addr);
-
- /*
- * Looks like this is necessary to deal with on all architectures,
- * even this %$#%$# N440BX Intel based thing doesn't get it right.
- * Ie. having two NICs in the machine, one will have the cache
- * line set at boot time, the other will not.
- */
- pdev = ap->pdev;
- pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &cache_size);
- cache_size <<= 2;
- if (cache_size != SMP_CACHE_BYTES) {
- printk(KERN_INFO " PCI cache line size set incorrectly "
- "(%i bytes) by BIOS/FW, ", cache_size);
- if (cache_size > SMP_CACHE_BYTES)
- printk("expecting %i\n", SMP_CACHE_BYTES);
- else {
- printk("correcting to %i\n", SMP_CACHE_BYTES);
- pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE,
- SMP_CACHE_BYTES >> 2);
- }
- }
-
- pci_state = readl(&regs->PciState);
- printk(KERN_INFO " PCI bus width: %i bits, speed: %iMHz, "
- "latency: %i clks\n",
- (pci_state & PCI_32BIT) ? 32 : 64,
- (pci_state & PCI_66MHZ) ? 66 : 33,
- ap->pci_latency);
-
- /*
- * Set the max DMA transfer size. Seems that for most systems
- * the performance is better when no MAX parameter is
- * set. However for systems enabling PCI write and invalidate,
- * DMA writes must be set to the L1 cache line size to get
- * optimal performance.
- *
- * The default is now to turn the PCI write and invalidate off
- * - that is what Alteon does for NT.
- */
- tmp = READ_CMD_MEM | WRITE_CMD_MEM;
- if (ap->version >= 2) {
- tmp |= (MEM_READ_MULTIPLE | (pci_state & PCI_66MHZ));
- /*
- * Tuning parameters only supported for 8 cards
- */
- if (board_idx == BOARD_IDX_OVERFLOW ||
- dis_pci_mem_inval[board_idx]) {
- if (ap->pci_command & PCI_COMMAND_INVALIDATE) {
- ap->pci_command &= ~PCI_COMMAND_INVALIDATE;
- pci_write_config_word(pdev, PCI_COMMAND,
- ap->pci_command);
- printk(KERN_INFO " Disabling PCI memory "
- "write and invalidate\n");
- }
- } else if (ap->pci_command & PCI_COMMAND_INVALIDATE) {
- printk(KERN_INFO " PCI memory write & invalidate "
- "enabled by BIOS, enabling counter measures\n");
-
- switch(SMP_CACHE_BYTES) {
- case 16:
- tmp |= DMA_WRITE_MAX_16;
- break;
- case 32:
- tmp |= DMA_WRITE_MAX_32;
- break;
- case 64:
- tmp |= DMA_WRITE_MAX_64;
- break;
- case 128:
- tmp |= DMA_WRITE_MAX_128;
- break;
- default:
- printk(KERN_INFO " Cache line size %i not "
- "supported, PCI write and invalidate "
- "disabled\n", SMP_CACHE_BYTES);
- ap->pci_command &= ~PCI_COMMAND_INVALIDATE;
- pci_write_config_word(pdev, PCI_COMMAND,
- ap->pci_command);
- }
- }
- }
-
-#ifdef __sparc__
- /*
- * On this platform, we know what the best dma settings
- * are. We use 64-byte maximum bursts, because if we
- * burst larger than the cache line size (or even cross
- * a 64byte boundary in a single burst) the UltraSparc
- * PCI controller will disconnect at 64-byte multiples.
- *
- * Read-multiple will be properly enabled above, and when
- * set will give the PCI controller proper hints about
- * prefetching.
- */
- tmp &= ~DMA_READ_WRITE_MASK;
- tmp |= DMA_READ_MAX_64;
- tmp |= DMA_WRITE_MAX_64;
-#endif
-#ifdef __alpha__
- tmp &= ~DMA_READ_WRITE_MASK;
- tmp |= DMA_READ_MAX_128;
- /*
- * All the docs say MUST NOT. Well, I did.
- * Nothing terrible happens, if we load wrong size.
- * Bit w&i still works better!
- */
- tmp |= DMA_WRITE_MAX_128;
-#endif
- writel(tmp, &regs->PciState);
-
-#if 0
- /*
- * The Host PCI bus controller driver has to set FBB.
- * If all devices on that PCI bus support FBB, then the controller
- * can enable FBB support in the Host PCI Bus controller (or on
- * the PCI-PCI bridge if that applies).
- * -ggg
- */
- /*
- * I have received reports from people having problems when this
- * bit is enabled.
- */
- if (!(ap->pci_command & PCI_COMMAND_FAST_BACK)) {
- printk(KERN_INFO " Enabling PCI Fast Back to Back\n");
- ap->pci_command |= PCI_COMMAND_FAST_BACK;
- pci_write_config_word(pdev, PCI_COMMAND, ap->pci_command);
- }
-#endif
-
- /*
- * Configure DMA attributes.
- */
- if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
- ap->pci_using_dac = 1;
- } else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
- ap->pci_using_dac = 0;
- } else {
- ecode = -ENODEV;
- goto init_error;
- }
-
- /*
- * Initialize the generic info block and the command+event rings
- * and the control blocks for the transmit and receive rings
- * as they need to be setup once and for all.
- */
- if (!(info = pci_alloc_consistent(ap->pdev, sizeof(struct ace_info),
- &ap->info_dma))) {
- ecode = -EAGAIN;
- goto init_error;
- }
- ap->info = info;
-
- /*
- * Get the memory for the skb rings.
- */
- if (!(ap->skb = kmalloc(sizeof(struct ace_skb), GFP_KERNEL))) {
- ecode = -EAGAIN;
- goto init_error;
- }
-
- ecode = request_irq(pdev->irq, ace_interrupt, IRQF_SHARED,
- DRV_NAME, dev);
- if (ecode) {
- printk(KERN_WARNING "%s: Requested IRQ %d is busy\n",
- DRV_NAME, pdev->irq);
- goto init_error;
- } else
- dev->irq = pdev->irq;
-
-#ifdef INDEX_DEBUG
- spin_lock_init(&ap->debug_lock);
- ap->last_tx = ACE_TX_RING_ENTRIES(ap) - 1;
- ap->last_std_rx = 0;
- ap->last_mini_rx = 0;
-#endif
-
- memset(ap->info, 0, sizeof(struct ace_info));
- memset(ap->skb, 0, sizeof(struct ace_skb));
-
- ecode = ace_load_firmware(dev);
- if (ecode)
- goto init_error;
-
- ap->fw_running = 0;
-
- tmp_ptr = ap->info_dma;
- writel(tmp_ptr >> 32, &regs->InfoPtrHi);
- writel(tmp_ptr & 0xffffffff, &regs->InfoPtrLo);
-
- memset(ap->evt_ring, 0, EVT_RING_ENTRIES * sizeof(struct event));
-
- set_aceaddr(&info->evt_ctrl.rngptr, ap->evt_ring_dma);
- info->evt_ctrl.flags = 0;
-
- *(ap->evt_prd) = 0;
- wmb();
- set_aceaddr(&info->evt_prd_ptr, ap->evt_prd_dma);
- writel(0, &regs->EvtCsm);
-
- set_aceaddr(&info->cmd_ctrl.rngptr, 0x100);
- info->cmd_ctrl.flags = 0;
- info->cmd_ctrl.max_len = 0;
-
- for (i = 0; i < CMD_RING_ENTRIES; i++)
- writel(0, &regs->CmdRng[i]);
-
- writel(0, &regs->CmdPrd);
- writel(0, &regs->CmdCsm);
-
- tmp_ptr = ap->info_dma;
- tmp_ptr += (unsigned long) &(((struct ace_info *)0)->s.stats);
- set_aceaddr(&info->stats2_ptr, (dma_addr_t) tmp_ptr);
-
- set_aceaddr(&info->rx_std_ctrl.rngptr, ap->rx_ring_base_dma);
- info->rx_std_ctrl.max_len = ACE_STD_BUFSIZE;
- info->rx_std_ctrl.flags =
- RCB_FLG_TCP_UDP_SUM | RCB_FLG_NO_PSEUDO_HDR | RCB_FLG_VLAN_ASSIST;
-
- memset(ap->rx_std_ring, 0,
- RX_STD_RING_ENTRIES * sizeof(struct rx_desc));
-
- for (i = 0; i < RX_STD_RING_ENTRIES; i++)
- ap->rx_std_ring[i].flags = BD_FLG_TCP_UDP_SUM;
-
- ap->rx_std_skbprd = 0;
- atomic_set(&ap->cur_rx_bufs, 0);
-
- set_aceaddr(&info->rx_jumbo_ctrl.rngptr,
- (ap->rx_ring_base_dma +
- (sizeof(struct rx_desc) * RX_STD_RING_ENTRIES)));
- info->rx_jumbo_ctrl.max_len = 0;
- info->rx_jumbo_ctrl.flags =
- RCB_FLG_TCP_UDP_SUM | RCB_FLG_NO_PSEUDO_HDR | RCB_FLG_VLAN_ASSIST;
-
- memset(ap->rx_jumbo_ring, 0,
- RX_JUMBO_RING_ENTRIES * sizeof(struct rx_desc));
-
- for (i = 0; i < RX_JUMBO_RING_ENTRIES; i++)
- ap->rx_jumbo_ring[i].flags = BD_FLG_TCP_UDP_SUM | BD_FLG_JUMBO;
-
- ap->rx_jumbo_skbprd = 0;
- atomic_set(&ap->cur_jumbo_bufs, 0);
-
- memset(ap->rx_mini_ring, 0,
- RX_MINI_RING_ENTRIES * sizeof(struct rx_desc));
-
- if (ap->version >= 2) {
- set_aceaddr(&info->rx_mini_ctrl.rngptr,
- (ap->rx_ring_base_dma +
- (sizeof(struct rx_desc) *
- (RX_STD_RING_ENTRIES +
- RX_JUMBO_RING_ENTRIES))));
- info->rx_mini_ctrl.max_len = ACE_MINI_SIZE;
- info->rx_mini_ctrl.flags =
- RCB_FLG_TCP_UDP_SUM|RCB_FLG_NO_PSEUDO_HDR|RCB_FLG_VLAN_ASSIST;
-
- for (i = 0; i < RX_MINI_RING_ENTRIES; i++)
- ap->rx_mini_ring[i].flags =
- BD_FLG_TCP_UDP_SUM | BD_FLG_MINI;
- } else {
- set_aceaddr(&info->rx_mini_ctrl.rngptr, 0);
- info->rx_mini_ctrl.flags = RCB_FLG_RNG_DISABLE;
- info->rx_mini_ctrl.max_len = 0;
- }
-
- ap->rx_mini_skbprd = 0;
- atomic_set(&ap->cur_mini_bufs, 0);
-
- set_aceaddr(&info->rx_return_ctrl.rngptr,
- (ap->rx_ring_base_dma +
- (sizeof(struct rx_desc) *
- (RX_STD_RING_ENTRIES +
- RX_JUMBO_RING_ENTRIES +
- RX_MINI_RING_ENTRIES))));
- info->rx_return_ctrl.flags = 0;
- info->rx_return_ctrl.max_len = RX_RETURN_RING_ENTRIES;
-
- memset(ap->rx_return_ring, 0,
- RX_RETURN_RING_ENTRIES * sizeof(struct rx_desc));
-
- set_aceaddr(&info->rx_ret_prd_ptr, ap->rx_ret_prd_dma);
- *(ap->rx_ret_prd) = 0;
-
- writel(TX_RING_BASE, &regs->WinBase);
-
- if (ACE_IS_TIGON_I(ap)) {
- ap->tx_ring = (__force struct tx_desc *) regs->Window;
- for (i = 0; i < (TIGON_I_TX_RING_ENTRIES
- * sizeof(struct tx_desc)) / sizeof(u32); i++)
- writel(0, (__force void __iomem *)ap->tx_ring + i * 4);
-
- set_aceaddr(&info->tx_ctrl.rngptr, TX_RING_BASE);
- } else {
- memset(ap->tx_ring, 0,
- MAX_TX_RING_ENTRIES * sizeof(struct tx_desc));
-
- set_aceaddr(&info->tx_ctrl.rngptr, ap->tx_ring_dma);
- }
-
- info->tx_ctrl.max_len = ACE_TX_RING_ENTRIES(ap);
- tmp = RCB_FLG_TCP_UDP_SUM | RCB_FLG_NO_PSEUDO_HDR | RCB_FLG_VLAN_ASSIST;
-
- /*
- * The Tigon I does not like having the TX ring in host memory ;-(
- */
- if (!ACE_IS_TIGON_I(ap))
- tmp |= RCB_FLG_TX_HOST_RING;
-#if TX_COAL_INTS_ONLY
- tmp |= RCB_FLG_COAL_INT_ONLY;
-#endif
- info->tx_ctrl.flags = tmp;
-
- set_aceaddr(&info->tx_csm_ptr, ap->tx_csm_dma);
-
- /*
- * Potential item for tuning parameter
- */
-#if 0 /* NO */
- writel(DMA_THRESH_16W, &regs->DmaReadCfg);
- writel(DMA_THRESH_16W, &regs->DmaWriteCfg);
-#else
- writel(DMA_THRESH_8W, &regs->DmaReadCfg);
- writel(DMA_THRESH_8W, &regs->DmaWriteCfg);
-#endif
-
- writel(0, &regs->MaskInt);
- writel(1, &regs->IfIdx);
-#if 0
- /*
- * McKinley boxes do not like us fiddling with AssistState
- * this early
- */
- writel(1, &regs->AssistState);
-#endif
-
- writel(DEF_STAT, &regs->TuneStatTicks);
- writel(DEF_TRACE, &regs->TuneTrace);
-
- ace_set_rxtx_parms(dev, 0);
-
- if (board_idx == BOARD_IDX_OVERFLOW) {
- printk(KERN_WARNING "%s: more than %i NICs detected, "
- "ignoring module parameters!\n",
- ap->name, ACE_MAX_MOD_PARMS);
- } else if (board_idx >= 0) {
- if (tx_coal_tick[board_idx])
- writel(tx_coal_tick[board_idx],
- &regs->TuneTxCoalTicks);
- if (max_tx_desc[board_idx])
- writel(max_tx_desc[board_idx], &regs->TuneMaxTxDesc);
-
- if (rx_coal_tick[board_idx])
- writel(rx_coal_tick[board_idx],
- &regs->TuneRxCoalTicks);
- if (max_rx_desc[board_idx])
- writel(max_rx_desc[board_idx], &regs->TuneMaxRxDesc);
-
- if (trace[board_idx])
- writel(trace[board_idx], &regs->TuneTrace);
-
- if ((tx_ratio[board_idx] > 0) && (tx_ratio[board_idx] < 64))
- writel(tx_ratio[board_idx], &regs->TxBufRat);
- }
-
- /*
- * Default link parameters
- */
- tmp = LNK_ENABLE | LNK_FULL_DUPLEX | LNK_1000MB | LNK_100MB |
- LNK_10MB | LNK_RX_FLOW_CTL_Y | LNK_NEG_FCTL | LNK_NEGOTIATE;
- if(ap->version >= 2)
- tmp |= LNK_TX_FLOW_CTL_Y;
-
- /*
- * Override link default parameters
- */
- if ((board_idx >= 0) && link_state[board_idx]) {
- int option = link_state[board_idx];
-
- tmp = LNK_ENABLE;
-
- if (option & 0x01) {
- printk(KERN_INFO "%s: Setting half duplex link\n",
- ap->name);
- tmp &= ~LNK_FULL_DUPLEX;
- }
- if (option & 0x02)
- tmp &= ~LNK_NEGOTIATE;
- if (option & 0x10)
- tmp |= LNK_10MB;
- if (option & 0x20)
- tmp |= LNK_100MB;
- if (option & 0x40)
- tmp |= LNK_1000MB;
- if ((option & 0x70) == 0) {
- printk(KERN_WARNING "%s: No media speed specified, "
- "forcing auto negotiation\n", ap->name);
- tmp |= LNK_NEGOTIATE | LNK_1000MB |
- LNK_100MB | LNK_10MB;
- }
- if ((option & 0x100) == 0)
- tmp |= LNK_NEG_FCTL;
- else
- printk(KERN_INFO "%s: Disabling flow control "
- "negotiation\n", ap->name);
- if (option & 0x200)
- tmp |= LNK_RX_FLOW_CTL_Y;
- if ((option & 0x400) && (ap->version >= 2)) {
- printk(KERN_INFO "%s: Enabling TX flow control\n",
- ap->name);
- tmp |= LNK_TX_FLOW_CTL_Y;
- }
- }
-
- ap->link = tmp;
- writel(tmp, &regs->TuneLink);
- if (ap->version >= 2)
- writel(tmp, &regs->TuneFastLink);
-
- writel(ap->firmware_start, &regs->Pc);
-
- writel(0, &regs->Mb0Lo);
-
- /*
- * Set tx_csm before we start receiving interrupts, otherwise
- * the interrupt handler might think it is supposed to process
- * tx ints before we are up and running, which may cause a null
- * pointer access in the int handler.
- */
- ap->cur_rx = 0;
- ap->tx_prd = *(ap->tx_csm) = ap->tx_ret_csm = 0;
-
- wmb();
- ace_set_txprd(regs, ap, 0);
- writel(0, &regs->RxRetCsm);
-
- /*
- * Enable DMA engine now.
- * If we do this sooner, Mckinley box pukes.
- * I assume it's because Tigon II DMA engine wants to check
- * *something* even before the CPU is started.
- */
- writel(1, &regs->AssistState); /* enable DMA */
-
- /*
- * Start the NIC CPU
- */
- writel(readl(&regs->CpuCtrl) & ~(CPU_HALT|CPU_TRACE), &regs->CpuCtrl);
- readl(&regs->CpuCtrl);
-
- /*
- * Wait for the firmware to spin up - max 3 seconds.
- */
- myjif = jiffies + 3 * HZ;
- while (time_before(jiffies, myjif) && !ap->fw_running)
- cpu_relax();
-
- if (!ap->fw_running) {
- printk(KERN_ERR "%s: Firmware NOT running!\n", ap->name);
-
- ace_dump_trace(ap);
- writel(readl(&regs->CpuCtrl) | CPU_HALT, &regs->CpuCtrl);
- readl(&regs->CpuCtrl);
-
- /* aman@sgi.com - account for badly behaving firmware/NIC:
- * - have observed that the NIC may continue to generate
- * interrupts for some reason; attempt to stop it - halt
- * second CPU for Tigon II cards, and also clear Mb0
- * - if we're a module, we'll fail to load if this was
- * the only GbE card in the system => if the kernel does
- * see an interrupt from the NIC, code to handle it is
- * gone and OOps! - so free_irq also
- */
- if (ap->version >= 2)
- writel(readl(&regs->CpuBCtrl) | CPU_HALT,
- &regs->CpuBCtrl);
- writel(0, &regs->Mb0Lo);
- readl(&regs->Mb0Lo);
-
- ecode = -EBUSY;
- goto init_error;
- }
-
- /*
- * We load the ring here as there seem to be no way to tell the
- * firmware to wipe the ring without re-initializing it.
- */
- if (!test_and_set_bit(0, &ap->std_refill_busy))
- ace_load_std_rx_ring(dev, RX_RING_SIZE);
- else
- printk(KERN_ERR "%s: Someone is busy refilling the RX ring\n",
- ap->name);
- if (ap->version >= 2) {
- if (!test_and_set_bit(0, &ap->mini_refill_busy))
- ace_load_mini_rx_ring(dev, RX_MINI_SIZE);
- else
- printk(KERN_ERR "%s: Someone is busy refilling "
- "the RX mini ring\n", ap->name);
- }
- return 0;
-
- init_error:
- ace_init_cleanup(dev);
- return ecode;
-}
-
-
-static void ace_set_rxtx_parms(struct net_device *dev, int jumbo)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- int board_idx = ap->board_idx;
-
- if (board_idx >= 0) {
- if (!jumbo) {
- if (!tx_coal_tick[board_idx])
- writel(DEF_TX_COAL, &regs->TuneTxCoalTicks);
- if (!max_tx_desc[board_idx])
- writel(DEF_TX_MAX_DESC, &regs->TuneMaxTxDesc);
- if (!rx_coal_tick[board_idx])
- writel(DEF_RX_COAL, &regs->TuneRxCoalTicks);
- if (!max_rx_desc[board_idx])
- writel(DEF_RX_MAX_DESC, &regs->TuneMaxRxDesc);
- if (!tx_ratio[board_idx])
- writel(DEF_TX_RATIO, &regs->TxBufRat);
- } else {
- if (!tx_coal_tick[board_idx])
- writel(DEF_JUMBO_TX_COAL,
- &regs->TuneTxCoalTicks);
- if (!max_tx_desc[board_idx])
- writel(DEF_JUMBO_TX_MAX_DESC,
- &regs->TuneMaxTxDesc);
- if (!rx_coal_tick[board_idx])
- writel(DEF_JUMBO_RX_COAL,
- &regs->TuneRxCoalTicks);
- if (!max_rx_desc[board_idx])
- writel(DEF_JUMBO_RX_MAX_DESC,
- &regs->TuneMaxRxDesc);
- if (!tx_ratio[board_idx])
- writel(DEF_JUMBO_TX_RATIO, &regs->TxBufRat);
- }
- }
-}
-
-
-static void ace_watchdog(struct net_device *data)
-{
- struct net_device *dev = data;
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
-
- /*
- * We haven't received a stats update event for more than 2.5
- * seconds and there is data in the transmit queue, thus we
- * assume the card is stuck.
- */
- if (*ap->tx_csm != ap->tx_ret_csm) {
- printk(KERN_WARNING "%s: Transmitter is stuck, %08x\n",
- dev->name, (unsigned int)readl(&regs->HostCtrl));
- /* This can happen due to ieee flow control. */
- } else {
- printk(KERN_DEBUG "%s: BUG... transmitter died. Kicking it.\n",
- dev->name);
-#if 0
- netif_wake_queue(dev);
-#endif
- }
-}
-
-
-static void ace_tasklet(unsigned long arg)
-{
- struct net_device *dev = (struct net_device *) arg;
- struct ace_private *ap = netdev_priv(dev);
- int cur_size;
-
- cur_size = atomic_read(&ap->cur_rx_bufs);
- if ((cur_size < RX_LOW_STD_THRES) &&
- !test_and_set_bit(0, &ap->std_refill_busy)) {
-#ifdef DEBUG
- printk("refilling buffers (current %i)\n", cur_size);
-#endif
- ace_load_std_rx_ring(dev, RX_RING_SIZE - cur_size);
- }
-
- if (ap->version >= 2) {
- cur_size = atomic_read(&ap->cur_mini_bufs);
- if ((cur_size < RX_LOW_MINI_THRES) &&
- !test_and_set_bit(0, &ap->mini_refill_busy)) {
-#ifdef DEBUG
- printk("refilling mini buffers (current %i)\n",
- cur_size);
-#endif
- ace_load_mini_rx_ring(dev, RX_MINI_SIZE - cur_size);
- }
- }
-
- cur_size = atomic_read(&ap->cur_jumbo_bufs);
- if (ap->jumbo && (cur_size < RX_LOW_JUMBO_THRES) &&
- !test_and_set_bit(0, &ap->jumbo_refill_busy)) {
-#ifdef DEBUG
- printk("refilling jumbo buffers (current %i)\n", cur_size);
-#endif
- ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE - cur_size);
- }
- ap->tasklet_pending = 0;
-}
-
-
-/*
- * Copy the contents of the NIC's trace buffer to kernel memory.
- */
-static void ace_dump_trace(struct ace_private *ap)
-{
-#if 0
- if (!ap->trace_buf)
- if (!(ap->trace_buf = kmalloc(ACE_TRACE_SIZE, GFP_KERNEL)))
- return;
-#endif
-}
-
-
-/*
- * Load the standard rx ring.
- *
- * Loading rings is safe without holding the spin lock since this is
- * done only before the device is enabled, thus no interrupts are
- * generated and the interrupt handler/tasklet handler cannot race with it.
- */
-static void ace_load_std_rx_ring(struct net_device *dev, int nr_bufs)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- short i, idx;
-
-
- prefetchw(&ap->cur_rx_bufs);
-
- idx = ap->rx_std_skbprd;
-
- for (i = 0; i < nr_bufs; i++) {
- struct sk_buff *skb;
- struct rx_desc *rd;
- dma_addr_t mapping;
-
- skb = netdev_alloc_skb_ip_align(dev, ACE_STD_BUFSIZE);
- if (!skb)
- break;
-
- mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
- offset_in_page(skb->data),
- ACE_STD_BUFSIZE,
- PCI_DMA_FROMDEVICE);
- ap->skb->rx_std_skbuff[idx].skb = skb;
- dma_unmap_addr_set(&ap->skb->rx_std_skbuff[idx],
- mapping, mapping);
-
- rd = &ap->rx_std_ring[idx];
- set_aceaddr(&rd->addr, mapping);
- rd->size = ACE_STD_BUFSIZE;
- rd->idx = idx;
- idx = (idx + 1) % RX_STD_RING_ENTRIES;
- }
-
- if (!i)
- goto error_out;
-
- atomic_add(i, &ap->cur_rx_bufs);
- ap->rx_std_skbprd = idx;
-
- if (ACE_IS_TIGON_I(ap)) {
- struct cmd cmd;
- cmd.evt = C_SET_RX_PRD_IDX;
- cmd.code = 0;
- cmd.idx = ap->rx_std_skbprd;
- ace_issue_cmd(regs, &cmd);
- } else {
- writel(idx, &regs->RxStdPrd);
- wmb();
- }
-
- out:
- clear_bit(0, &ap->std_refill_busy);
- return;
-
- error_out:
- printk(KERN_INFO "Out of memory when allocating "
- "standard receive buffers\n");
- goto out;
-}
-
-
-static void ace_load_mini_rx_ring(struct net_device *dev, int nr_bufs)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- short i, idx;
-
- prefetchw(&ap->cur_mini_bufs);
-
- idx = ap->rx_mini_skbprd;
- for (i = 0; i < nr_bufs; i++) {
- struct sk_buff *skb;
- struct rx_desc *rd;
- dma_addr_t mapping;
-
- skb = netdev_alloc_skb_ip_align(dev, ACE_MINI_BUFSIZE);
- if (!skb)
- break;
-
- mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
- offset_in_page(skb->data),
- ACE_MINI_BUFSIZE,
- PCI_DMA_FROMDEVICE);
- ap->skb->rx_mini_skbuff[idx].skb = skb;
- dma_unmap_addr_set(&ap->skb->rx_mini_skbuff[idx],
- mapping, mapping);
-
- rd = &ap->rx_mini_ring[idx];
- set_aceaddr(&rd->addr, mapping);
- rd->size = ACE_MINI_BUFSIZE;
- rd->idx = idx;
- idx = (idx + 1) % RX_MINI_RING_ENTRIES;
- }
-
- if (!i)
- goto error_out;
-
- atomic_add(i, &ap->cur_mini_bufs);
-
- ap->rx_mini_skbprd = idx;
-
- writel(idx, &regs->RxMiniPrd);
- wmb();
-
- out:
- clear_bit(0, &ap->mini_refill_busy);
- return;
- error_out:
- printk(KERN_INFO "Out of memory when allocating "
- "mini receive buffers\n");
- goto out;
-}
-
-
-/*
- * Load the jumbo rx ring, this may happen at any time if the MTU
- * is changed to a value > 1500.
- */
-static void ace_load_jumbo_rx_ring(struct net_device *dev, int nr_bufs)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- short i, idx;
-
- idx = ap->rx_jumbo_skbprd;
-
- for (i = 0; i < nr_bufs; i++) {
- struct sk_buff *skb;
- struct rx_desc *rd;
- dma_addr_t mapping;
-
- skb = netdev_alloc_skb_ip_align(dev, ACE_JUMBO_BUFSIZE);
- if (!skb)
- break;
-
- mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
- offset_in_page(skb->data),
- ACE_JUMBO_BUFSIZE,
- PCI_DMA_FROMDEVICE);
- ap->skb->rx_jumbo_skbuff[idx].skb = skb;
- dma_unmap_addr_set(&ap->skb->rx_jumbo_skbuff[idx],
- mapping, mapping);
-
- rd = &ap->rx_jumbo_ring[idx];
- set_aceaddr(&rd->addr, mapping);
- rd->size = ACE_JUMBO_BUFSIZE;
- rd->idx = idx;
- idx = (idx + 1) % RX_JUMBO_RING_ENTRIES;
- }
-
- if (!i)
- goto error_out;
-
- atomic_add(i, &ap->cur_jumbo_bufs);
- ap->rx_jumbo_skbprd = idx;
-
- if (ACE_IS_TIGON_I(ap)) {
- struct cmd cmd;
- cmd.evt = C_SET_RX_JUMBO_PRD_IDX;
- cmd.code = 0;
- cmd.idx = ap->rx_jumbo_skbprd;
- ace_issue_cmd(regs, &cmd);
- } else {
- writel(idx, &regs->RxJumboPrd);
- wmb();
- }
-
- out:
- clear_bit(0, &ap->jumbo_refill_busy);
- return;
- error_out:
- if (net_ratelimit())
- printk(KERN_INFO "Out of memory when allocating "
- "jumbo receive buffers\n");
- goto out;
-}
-
-
-/*
- * All events are considered to be slow (RX/TX ints do not generate
- * events) and are handled here, outside the main interrupt handler,
- * to reduce the size of the handler.
- */
-static u32 ace_handle_event(struct net_device *dev, u32 evtcsm, u32 evtprd)
-{
- struct ace_private *ap;
-
- ap = netdev_priv(dev);
-
- while (evtcsm != evtprd) {
- switch (ap->evt_ring[evtcsm].evt) {
- case E_FW_RUNNING:
- printk(KERN_INFO "%s: Firmware up and running\n",
- ap->name);
- ap->fw_running = 1;
- wmb();
- break;
- case E_STATS_UPDATED:
- break;
- case E_LNK_STATE:
- {
- u16 code = ap->evt_ring[evtcsm].code;
- switch (code) {
- case E_C_LINK_UP:
- {
- u32 state = readl(&ap->regs->GigLnkState);
- printk(KERN_WARNING "%s: Optical link UP "
- "(%s Duplex, Flow Control: %s%s)\n",
- ap->name,
- state & LNK_FULL_DUPLEX ? "Full":"Half",
- state & LNK_TX_FLOW_CTL_Y ? "TX " : "",
- state & LNK_RX_FLOW_CTL_Y ? "RX" : "");
- break;
- }
- case E_C_LINK_DOWN:
- printk(KERN_WARNING "%s: Optical link DOWN\n",
- ap->name);
- break;
- case E_C_LINK_10_100:
- printk(KERN_WARNING "%s: 10/100BaseT link "
- "UP\n", ap->name);
- break;
- default:
- printk(KERN_ERR "%s: Unknown optical link "
- "state %02x\n", ap->name, code);
- }
- break;
- }
- case E_ERROR:
- switch (ap->evt_ring[evtcsm].code) {
- case E_C_ERR_INVAL_CMD:
- printk(KERN_ERR "%s: invalid command error\n",
- ap->name);
- break;
- case E_C_ERR_UNIMP_CMD:
- printk(KERN_ERR "%s: unimplemented command "
- "error\n", ap->name);
- break;
- case E_C_ERR_BAD_CFG:
- printk(KERN_ERR "%s: bad config error\n",
- ap->name);
- break;
- default:
- printk(KERN_ERR "%s: unknown error %02x\n",
- ap->name, ap->evt_ring[evtcsm].code);
- }
- break;
- case E_RESET_JUMBO_RNG:
- {
- int i;
- for (i = 0; i < RX_JUMBO_RING_ENTRIES; i++) {
- if (ap->skb->rx_jumbo_skbuff[i].skb) {
- ap->rx_jumbo_ring[i].size = 0;
- set_aceaddr(&ap->rx_jumbo_ring[i].addr, 0);
- dev_kfree_skb(ap->skb->rx_jumbo_skbuff[i].skb);
- ap->skb->rx_jumbo_skbuff[i].skb = NULL;
- }
- }
-
- if (ACE_IS_TIGON_I(ap)) {
- struct cmd cmd;
- cmd.evt = C_SET_RX_JUMBO_PRD_IDX;
- cmd.code = 0;
- cmd.idx = 0;
- ace_issue_cmd(ap->regs, &cmd);
- } else {
- writel(0, &((ap->regs)->RxJumboPrd));
- wmb();
- }
-
- ap->jumbo = 0;
- ap->rx_jumbo_skbprd = 0;
- printk(KERN_INFO "%s: Jumbo ring flushed\n",
- ap->name);
- clear_bit(0, &ap->jumbo_refill_busy);
- break;
- }
- default:
- printk(KERN_ERR "%s: Unhandled event 0x%02x\n",
- ap->name, ap->evt_ring[evtcsm].evt);
- }
- evtcsm = (evtcsm + 1) % EVT_RING_ENTRIES;
- }
-
- return evtcsm;
-}
-
-
-static void ace_rx_int(struct net_device *dev, u32 rxretprd, u32 rxretcsm)
-{
- struct ace_private *ap = netdev_priv(dev);
- u32 idx;
- int mini_count = 0, std_count = 0;
-
- idx = rxretcsm;
-
- prefetchw(&ap->cur_rx_bufs);
- prefetchw(&ap->cur_mini_bufs);
-
- while (idx != rxretprd) {
- struct ring_info *rip;
- struct sk_buff *skb;
- struct rx_desc *rxdesc, *retdesc;
- u32 skbidx;
- int bd_flags, desc_type, mapsize;
- u16 csum;
-
-
- /* make sure the rx descriptor isn't read before rxretprd */
- if (idx == rxretcsm)
- rmb();
-
- retdesc = &ap->rx_return_ring[idx];
- skbidx = retdesc->idx;
- bd_flags = retdesc->flags;
- desc_type = bd_flags & (BD_FLG_JUMBO | BD_FLG_MINI);
-
- switch (desc_type) {
- /*
- * Normal frames do not have any flags set
- *
- * Mini and normal frames arrive frequently,
- * so use a local counter to avoid doing
- * atomic operations for each packet arriving.
- */
- case 0:
- rip = &ap->skb->rx_std_skbuff[skbidx];
- mapsize = ACE_STD_BUFSIZE;
- rxdesc = &ap->rx_std_ring[skbidx];
- std_count++;
- break;
- case BD_FLG_JUMBO:
- rip = &ap->skb->rx_jumbo_skbuff[skbidx];
- mapsize = ACE_JUMBO_BUFSIZE;
- rxdesc = &ap->rx_jumbo_ring[skbidx];
- atomic_dec(&ap->cur_jumbo_bufs);
- break;
- case BD_FLG_MINI:
- rip = &ap->skb->rx_mini_skbuff[skbidx];
- mapsize = ACE_MINI_BUFSIZE;
- rxdesc = &ap->rx_mini_ring[skbidx];
- mini_count++;
- break;
- default:
- printk(KERN_INFO "%s: unknown frame type (0x%02x) "
- "returned by NIC\n", dev->name,
- retdesc->flags);
- goto error;
- }
-
- skb = rip->skb;
- rip->skb = NULL;
- pci_unmap_page(ap->pdev,
- dma_unmap_addr(rip, mapping),
- mapsize,
- PCI_DMA_FROMDEVICE);
- skb_put(skb, retdesc->size);
-
- /*
- * Fly baby, fly!
- */
- csum = retdesc->tcp_udp_csum;
-
- skb->protocol = eth_type_trans(skb, dev);
-
- /*
- * Instead of forcing the poor tigon mips cpu to calculate
- * pseudo hdr checksum, we do this ourselves.
- */
- if (bd_flags & BD_FLG_TCP_UDP_SUM) {
- skb->csum = htons(csum);
- skb->ip_summed = CHECKSUM_COMPLETE;
- } else {
- skb_checksum_none_assert(skb);
- }
-
- /* send it up */
- if ((bd_flags & BD_FLG_VLAN_TAG))
- __vlan_hwaccel_put_tag(skb, retdesc->vlan);
- netif_rx(skb);
-
- dev->stats.rx_packets++;
- dev->stats.rx_bytes += retdesc->size;
-
- idx = (idx + 1) % RX_RETURN_RING_ENTRIES;
- }
-
- atomic_sub(std_count, &ap->cur_rx_bufs);
- if (!ACE_IS_TIGON_I(ap))
- atomic_sub(mini_count, &ap->cur_mini_bufs);
-
- out:
- /*
- * According to the documentation RxRetCsm is obsolete with
- * the 12.3.x Firmware - my Tigon I NICs seem to disagree!
- */
- if (ACE_IS_TIGON_I(ap)) {
- writel(idx, &ap->regs->RxRetCsm);
- }
- ap->cur_rx = idx;
-
- return;
- error:
- idx = rxretprd;
- goto out;
-}
-
-
-static inline void ace_tx_int(struct net_device *dev,
- u32 txcsm, u32 idx)
-{
- struct ace_private *ap = netdev_priv(dev);
-
- do {
- struct sk_buff *skb;
- struct tx_ring_info *info;
-
- info = ap->skb->tx_skbuff + idx;
- skb = info->skb;
-
- if (dma_unmap_len(info, maplen)) {
- pci_unmap_page(ap->pdev, dma_unmap_addr(info, mapping),
- dma_unmap_len(info, maplen),
- PCI_DMA_TODEVICE);
- dma_unmap_len_set(info, maplen, 0);
- }
-
- if (skb) {
- dev->stats.tx_packets++;
- dev->stats.tx_bytes += skb->len;
- dev_kfree_skb_irq(skb);
- info->skb = NULL;
- }
-
- idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
- } while (idx != txcsm);
-
- if (netif_queue_stopped(dev))
- netif_wake_queue(dev);
-
- wmb();
- ap->tx_ret_csm = txcsm;
-
- /* So... tx_ret_csm is advanced _after_ check for device wakeup.
- *
- * We could try to make it before. In this case we would get
- * the following race condition: hard_start_xmit on other cpu
- * enters after we advanced tx_ret_csm and fills space,
- * which we have just freed, so that we make illegal device wakeup.
- * There is no good way to workaround this (at entry
- * to ace_start_xmit detects this condition and prevents
- * ring corruption, but it is not a good workaround.)
- *
- * When tx_ret_csm is advanced after, we wake up device _only_
- * if we really have some space in ring (though the core doing
- * hard_start_xmit can see full ring for some period and has to
- * synchronize.) Superb.
- * BUT! We get another subtle race condition. hard_start_xmit
- * may think that ring is full between wakeup and advancing
- * tx_ret_csm and will stop device instantly! It is not so bad.
- * We are guaranteed that there is something in ring, so that
- * the next irq will resume transmission. To speedup this we could
- * mark descriptor, which closes ring with BD_FLG_COAL_NOW
- * (see ace_start_xmit).
- *
- * Well, this dilemma exists in all lock-free devices.
- * We, following scheme used in drivers by Donald Becker,
- * select the least dangerous.
- * --ANK
- */
-}
-
-
-static irqreturn_t ace_interrupt(int irq, void *dev_id)
-{
- struct net_device *dev = (struct net_device *)dev_id;
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- u32 idx;
- u32 txcsm, rxretcsm, rxretprd;
- u32 evtcsm, evtprd;
-
- /*
- * In case of PCI shared interrupts or spurious interrupts,
- * we want to make sure it is actually our interrupt before
- * spending any time in here.
- */
- if (!(readl(&regs->HostCtrl) & IN_INT))
- return IRQ_NONE;
-
- /*
- * ACK intr now. Otherwise we will lose updates to rx_ret_prd,
- * which happened _after_ rxretprd = *ap->rx_ret_prd; but before
- * writel(0, &regs->Mb0Lo).
- *
- * "IRQ avoidance" recommended in docs applies to IRQs served
- * threads and it is wrong even for that case.
- */
- writel(0, &regs->Mb0Lo);
- readl(&regs->Mb0Lo);
-
- /*
- * There is no conflict between transmit handling in
- * start_xmit and receive processing, thus there is no reason
- * to take a spin lock for RX handling. Wait until we start
- * working on the other stuff - hey we don't need a spin lock
- * anymore.
- */
- rxretprd = *ap->rx_ret_prd;
- rxretcsm = ap->cur_rx;
-
- if (rxretprd != rxretcsm)
- ace_rx_int(dev, rxretprd, rxretcsm);
-
- txcsm = *ap->tx_csm;
- idx = ap->tx_ret_csm;
-
- if (txcsm != idx) {
- /*
- * If each skb takes only one descriptor this check degenerates
- * to identity, because new space has just been opened.
- * But if skbs are fragmented we must check that this index
- * update releases enough of space, otherwise we just
- * wait for device to make more work.
- */
- if (!tx_ring_full(ap, txcsm, ap->tx_prd))
- ace_tx_int(dev, txcsm, idx);
- }
-
- evtcsm = readl(&regs->EvtCsm);
- evtprd = *ap->evt_prd;
-
- if (evtcsm != evtprd) {
- evtcsm = ace_handle_event(dev, evtcsm, evtprd);
- writel(evtcsm, &regs->EvtCsm);
- }
-
- /*
- * This has to go last in the interrupt handler and run with
- * the spin lock released ... what lock?
- */
- if (netif_running(dev)) {
- int cur_size;
- int run_tasklet = 0;
-
- cur_size = atomic_read(&ap->cur_rx_bufs);
- if (cur_size < RX_LOW_STD_THRES) {
- if ((cur_size < RX_PANIC_STD_THRES) &&
- !test_and_set_bit(0, &ap->std_refill_busy)) {
-#ifdef DEBUG
- printk("low on std buffers %i\n", cur_size);
-#endif
- ace_load_std_rx_ring(dev,
- RX_RING_SIZE - cur_size);
- } else
- run_tasklet = 1;
- }
-
- if (!ACE_IS_TIGON_I(ap)) {
- cur_size = atomic_read(&ap->cur_mini_bufs);
- if (cur_size < RX_LOW_MINI_THRES) {
- if ((cur_size < RX_PANIC_MINI_THRES) &&
- !test_and_set_bit(0,
- &ap->mini_refill_busy)) {
-#ifdef DEBUG
- printk("low on mini buffers %i\n",
- cur_size);
-#endif
- ace_load_mini_rx_ring(dev,
- RX_MINI_SIZE - cur_size);
- } else
- run_tasklet = 1;
- }
- }
-
- if (ap->jumbo) {
- cur_size = atomic_read(&ap->cur_jumbo_bufs);
- if (cur_size < RX_LOW_JUMBO_THRES) {
- if ((cur_size < RX_PANIC_JUMBO_THRES) &&
- !test_and_set_bit(0,
- &ap->jumbo_refill_busy)){
-#ifdef DEBUG
- printk("low on jumbo buffers %i\n",
- cur_size);
-#endif
- ace_load_jumbo_rx_ring(dev,
- RX_JUMBO_SIZE - cur_size);
- } else
- run_tasklet = 1;
- }
- }
- if (run_tasklet && !ap->tasklet_pending) {
- ap->tasklet_pending = 1;
- tasklet_schedule(&ap->ace_tasklet);
- }
- }
-
- return IRQ_HANDLED;
-}
-
-static int ace_open(struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- struct cmd cmd;
-
- if (!(ap->fw_running)) {
- printk(KERN_WARNING "%s: Firmware not running!\n", dev->name);
- return -EBUSY;
- }
-
- writel(dev->mtu + ETH_HLEN + 4, &regs->IfMtu);
-
- cmd.evt = C_CLEAR_STATS;
- cmd.code = 0;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
-
- cmd.evt = C_HOST_STATE;
- cmd.code = C_C_STACK_UP;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
-
- if (ap->jumbo &&
- !test_and_set_bit(0, &ap->jumbo_refill_busy))
- ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE);
-
- if (dev->flags & IFF_PROMISC) {
- cmd.evt = C_SET_PROMISC_MODE;
- cmd.code = C_C_PROMISC_ENABLE;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
-
- ap->promisc = 1;
- } else
- ap->promisc = 0;
- ap->mcast_all = 0;
-
-#if 0
- cmd.evt = C_LNK_NEGOTIATION;
- cmd.code = 0;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
-#endif
-
- netif_start_queue(dev);
-
- /*
- * Setup the bottom half rx ring refill handler
- */
- tasklet_init(&ap->ace_tasklet, ace_tasklet, (unsigned long)dev);
- return 0;
-}
-
-
-static int ace_close(struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- struct cmd cmd;
- unsigned long flags;
- short i;
-
- /*
- * Without (or before) releasing the irq and stopping the hardware,
- * stopping the queue here is pointless; it will be woken again
- * instantly by the first irq.
- */
- netif_stop_queue(dev);
-
-
- if (ap->promisc) {
- cmd.evt = C_SET_PROMISC_MODE;
- cmd.code = C_C_PROMISC_DISABLE;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- ap->promisc = 0;
- }
-
- cmd.evt = C_HOST_STATE;
- cmd.code = C_C_STACK_DOWN;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
-
- tasklet_kill(&ap->ace_tasklet);
-
- /*
- * Make sure one CPU is not processing packets while
- * buffers are being released by another.
- */
-
- local_irq_save(flags);
- ace_mask_irq(dev);
-
- for (i = 0; i < ACE_TX_RING_ENTRIES(ap); i++) {
- struct sk_buff *skb;
- struct tx_ring_info *info;
-
- info = ap->skb->tx_skbuff + i;
- skb = info->skb;
-
- if (dma_unmap_len(info, maplen)) {
- if (ACE_IS_TIGON_I(ap)) {
- /* NB: TIGON_1 is special, tx_ring is in io space */
- struct tx_desc __iomem *tx;
- tx = (__force struct tx_desc __iomem *) &ap->tx_ring[i];
- writel(0, &tx->addr.addrhi);
- writel(0, &tx->addr.addrlo);
- writel(0, &tx->flagsize);
- } else
- memset(ap->tx_ring + i, 0,
- sizeof(struct tx_desc));
- pci_unmap_page(ap->pdev, dma_unmap_addr(info, mapping),
- dma_unmap_len(info, maplen),
- PCI_DMA_TODEVICE);
- dma_unmap_len_set(info, maplen, 0);
- }
- if (skb) {
- dev_kfree_skb(skb);
- info->skb = NULL;
- }
- }
-
- if (ap->jumbo) {
- cmd.evt = C_RESET_JUMBO_RNG;
- cmd.code = 0;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- }
-
- ace_unmask_irq(dev);
- local_irq_restore(flags);
-
- return 0;
-}
-
-
-static inline dma_addr_t
-ace_map_tx_skb(struct ace_private *ap, struct sk_buff *skb,
- struct sk_buff *tail, u32 idx)
-{
- dma_addr_t mapping;
- struct tx_ring_info *info;
-
- mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
- offset_in_page(skb->data),
- skb->len, PCI_DMA_TODEVICE);
-
- info = ap->skb->tx_skbuff + idx;
- info->skb = tail;
- dma_unmap_addr_set(info, mapping, mapping);
- dma_unmap_len_set(info, maplen, skb->len);
- return mapping;
-}
-
-
-static inline void
-ace_load_tx_bd(struct ace_private *ap, struct tx_desc *desc, u64 addr,
- u32 flagsize, u32 vlan_tag)
-{
-#if !USE_TX_COAL_NOW
- flagsize &= ~BD_FLG_COAL_NOW;
-#endif
-
- if (ACE_IS_TIGON_I(ap)) {
- struct tx_desc __iomem *io = (__force struct tx_desc __iomem *) desc;
- writel(addr >> 32, &io->addr.addrhi);
- writel(addr & 0xffffffff, &io->addr.addrlo);
- writel(flagsize, &io->flagsize);
- writel(vlan_tag, &io->vlanres);
- } else {
- desc->addr.addrhi = addr >> 32;
- desc->addr.addrlo = addr;
- desc->flagsize = flagsize;
- desc->vlanres = vlan_tag;
- }
-}
-
-
-static netdev_tx_t ace_start_xmit(struct sk_buff *skb,
- struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- struct tx_desc *desc;
- u32 idx, flagsize;
- unsigned long maxjiff = jiffies + 3*HZ;
-
-restart:
- idx = ap->tx_prd;
-
- if (tx_ring_full(ap, ap->tx_ret_csm, idx))
- goto overflow;
-
- if (!skb_shinfo(skb)->nr_frags) {
- dma_addr_t mapping;
- u32 vlan_tag = 0;
-
- mapping = ace_map_tx_skb(ap, skb, skb, idx);
- flagsize = (skb->len << 16) | (BD_FLG_END);
- if (skb->ip_summed == CHECKSUM_PARTIAL)
- flagsize |= BD_FLG_TCP_UDP_SUM;
- if (vlan_tx_tag_present(skb)) {
- flagsize |= BD_FLG_VLAN_TAG;
- vlan_tag = vlan_tx_tag_get(skb);
- }
- desc = ap->tx_ring + idx;
- idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
-
- /* Look at ace_tx_int for explanations. */
- if (tx_ring_full(ap, ap->tx_ret_csm, idx))
- flagsize |= BD_FLG_COAL_NOW;
-
- ace_load_tx_bd(ap, desc, mapping, flagsize, vlan_tag);
- } else {
- dma_addr_t mapping;
- u32 vlan_tag = 0;
- int i, len = 0;
-
- mapping = ace_map_tx_skb(ap, skb, NULL, idx);
- flagsize = (skb_headlen(skb) << 16);
- if (skb->ip_summed == CHECKSUM_PARTIAL)
- flagsize |= BD_FLG_TCP_UDP_SUM;
- if (vlan_tx_tag_present(skb)) {
- flagsize |= BD_FLG_VLAN_TAG;
- vlan_tag = vlan_tx_tag_get(skb);
- }
-
- ace_load_tx_bd(ap, ap->tx_ring + idx, mapping, flagsize, vlan_tag);
-
- idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
-
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- struct tx_ring_info *info;
-
- len += frag->size;
- info = ap->skb->tx_skbuff + idx;
- desc = ap->tx_ring + idx;
-
- mapping = pci_map_page(ap->pdev, frag->page,
- frag->page_offset, frag->size,
- PCI_DMA_TODEVICE);
-
- flagsize = (frag->size << 16);
- if (skb->ip_summed == CHECKSUM_PARTIAL)
- flagsize |= BD_FLG_TCP_UDP_SUM;
- idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
-
- if (i == skb_shinfo(skb)->nr_frags - 1) {
- flagsize |= BD_FLG_END;
- if (tx_ring_full(ap, ap->tx_ret_csm, idx))
- flagsize |= BD_FLG_COAL_NOW;
-
- /*
- * Only the last fragment frees
- * the skb!
- */
- info->skb = skb;
- } else {
- info->skb = NULL;
- }
- dma_unmap_addr_set(info, mapping, mapping);
- dma_unmap_len_set(info, maplen, frag->size);
- ace_load_tx_bd(ap, desc, mapping, flagsize, vlan_tag);
- }
- }
-
- wmb();
- ap->tx_prd = idx;
- ace_set_txprd(regs, ap, idx);
-
- if (flagsize & BD_FLG_COAL_NOW) {
- netif_stop_queue(dev);
-
- /*
- * A TX-descriptor producer (an IRQ) might have gotten
- * between, making the ring free again. Since xmit is
- * serialized, this is the only situation we have to
- * re-test.
- */
- if (!tx_ring_full(ap, ap->tx_ret_csm, idx))
- netif_wake_queue(dev);
- }
-
- return NETDEV_TX_OK;
-
-overflow:
- /*
- * This race condition is unavoidable with lock-free drivers.
- * We wake up the queue _before_ tx_prd is advanced, so that we can
- * enter hard_start_xmit too early, while tx ring still looks closed.
- * This happens ~1-4 times per 100000 packets, so it is safe to simply
- * loop while we sync with the other CPU. Probably, we need an additional
- * wmb() in ace_tx_int as well.
- *
- * Note that this race is relieved by reserving one more entry
- * in the tx ring than is necessary (see original non-SG driver).
- * However, with SG we need to reserve 2*MAX_SKB_FRAGS+1, which
- * is already overkill.
- *
- * Alternative is to return with 1 not throttling queue. In this
- * case loop becomes longer, no more useful effects.
- */
- if (time_before(jiffies, maxjiff)) {
- barrier();
- cpu_relax();
- goto restart;
- }
-
- /* The ring is stuck full. */
- printk(KERN_WARNING "%s: Transmit ring stuck full\n", dev->name);
- return NETDEV_TX_BUSY;
-}
-
-
-static int ace_change_mtu(struct net_device *dev, int new_mtu)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
-
- if (new_mtu > ACE_JUMBO_MTU)
- return -EINVAL;
-
- writel(new_mtu + ETH_HLEN + 4, &regs->IfMtu);
- dev->mtu = new_mtu;
-
- if (new_mtu > ACE_STD_MTU) {
- if (!(ap->jumbo)) {
- printk(KERN_INFO "%s: Enabling Jumbo frame "
- "support\n", dev->name);
- ap->jumbo = 1;
- if (!test_and_set_bit(0, &ap->jumbo_refill_busy))
- ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE);
- ace_set_rxtx_parms(dev, 1);
- }
- } else {
- while (test_and_set_bit(0, &ap->jumbo_refill_busy));
- ace_sync_irq(dev->irq);
- ace_set_rxtx_parms(dev, 0);
- if (ap->jumbo) {
- struct cmd cmd;
-
- cmd.evt = C_RESET_JUMBO_RNG;
- cmd.code = 0;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- }
- }
-
- return 0;
-}
-
-static int ace_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- u32 link;
-
- memset(ecmd, 0, sizeof(struct ethtool_cmd));
- ecmd->supported =
- (SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full |
- SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
- SUPPORTED_1000baseT_Half | SUPPORTED_1000baseT_Full |
- SUPPORTED_Autoneg | SUPPORTED_FIBRE);
-
- ecmd->port = PORT_FIBRE;
- ecmd->transceiver = XCVR_INTERNAL;
-
- link = readl(&regs->GigLnkState);
- if (link & LNK_1000MB)
- ethtool_cmd_speed_set(ecmd, SPEED_1000);
- else {
- link = readl(&regs->FastLnkState);
- if (link & LNK_100MB)
- ethtool_cmd_speed_set(ecmd, SPEED_100);
- else if (link & LNK_10MB)
- ethtool_cmd_speed_set(ecmd, SPEED_10);
- else
- ethtool_cmd_speed_set(ecmd, 0);
- }
- if (link & LNK_FULL_DUPLEX)
- ecmd->duplex = DUPLEX_FULL;
- else
- ecmd->duplex = DUPLEX_HALF;
-
- if (link & LNK_NEGOTIATE)
- ecmd->autoneg = AUTONEG_ENABLE;
- else
- ecmd->autoneg = AUTONEG_DISABLE;
-
-#if 0
- /*
- * Current struct ethtool_cmd is insufficient
- */
- ecmd->trace = readl(&regs->TuneTrace);
-
- ecmd->txcoal = readl(&regs->TuneTxCoalTicks);
- ecmd->rxcoal = readl(&regs->TuneRxCoalTicks);
-#endif
- ecmd->maxtxpkt = readl(&regs->TuneMaxTxDesc);
- ecmd->maxrxpkt = readl(&regs->TuneMaxRxDesc);
-
- return 0;
-}
-
-static int ace_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- u32 link, speed;
-
- link = readl(&regs->GigLnkState);
- if (link & LNK_1000MB)
- speed = SPEED_1000;
- else {
- link = readl(&regs->FastLnkState);
- if (link & LNK_100MB)
- speed = SPEED_100;
- else if (link & LNK_10MB)
- speed = SPEED_10;
- else
- speed = SPEED_100;
- }
-
- link = LNK_ENABLE | LNK_1000MB | LNK_100MB | LNK_10MB |
- LNK_RX_FLOW_CTL_Y | LNK_NEG_FCTL;
- if (!ACE_IS_TIGON_I(ap))
- link |= LNK_TX_FLOW_CTL_Y;
- if (ecmd->autoneg == AUTONEG_ENABLE)
- link |= LNK_NEGOTIATE;
- if (ethtool_cmd_speed(ecmd) != speed) {
- link &= ~(LNK_1000MB | LNK_100MB | LNK_10MB);
- switch (ethtool_cmd_speed(ecmd)) {
- case SPEED_1000:
- link |= LNK_1000MB;
- break;
- case SPEED_100:
- link |= LNK_100MB;
- break;
- case SPEED_10:
- link |= LNK_10MB;
- break;
- }
- }
-
- if (ecmd->duplex == DUPLEX_FULL)
- link |= LNK_FULL_DUPLEX;
-
- if (link != ap->link) {
- struct cmd cmd;
- printk(KERN_INFO "%s: Renegotiating link state\n",
- dev->name);
-
- ap->link = link;
- writel(link, &regs->TuneLink);
- if (!ACE_IS_TIGON_I(ap))
- writel(link, &regs->TuneFastLink);
- wmb();
-
- cmd.evt = C_LNK_NEGOTIATION;
- cmd.code = 0;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- }
- return 0;
-}
-
-static void ace_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- struct ace_private *ap = netdev_priv(dev);
-
- strlcpy(info->driver, "acenic", sizeof(info->driver));
- snprintf(info->version, sizeof(info->version), "%i.%i.%i",
- ap->firmware_major, ap->firmware_minor,
- ap->firmware_fix);
-
- if (ap->pdev)
- strlcpy(info->bus_info, pci_name(ap->pdev),
- sizeof(info->bus_info));
-
-}
-
-/*
- * Set the hardware MAC address.
- */
-static int ace_set_mac_addr(struct net_device *dev, void *p)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- struct sockaddr *addr = p;
- u8 *da;
- struct cmd cmd;
-
- if (netif_running(dev))
- return -EBUSY;
-
- memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
-
- da = (u8 *)dev->dev_addr;
-
- writel(da[0] << 8 | da[1], &regs->MacAddrHi);
- writel((da[2] << 24) | (da[3] << 16) | (da[4] << 8) | da[5],
- &regs->MacAddrLo);
-
- cmd.evt = C_SET_MAC_ADDR;
- cmd.code = 0;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
-
- return 0;
-}
-
-
-static void ace_set_multicast_list(struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- struct cmd cmd;
-
- if ((dev->flags & IFF_ALLMULTI) && !(ap->mcast_all)) {
- cmd.evt = C_SET_MULTICAST_MODE;
- cmd.code = C_C_MCAST_ENABLE;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- ap->mcast_all = 1;
- } else if (ap->mcast_all) {
- cmd.evt = C_SET_MULTICAST_MODE;
- cmd.code = C_C_MCAST_DISABLE;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- ap->mcast_all = 0;
- }
-
- if ((dev->flags & IFF_PROMISC) && !(ap->promisc)) {
- cmd.evt = C_SET_PROMISC_MODE;
- cmd.code = C_C_PROMISC_ENABLE;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- ap->promisc = 1;
- } else if (!(dev->flags & IFF_PROMISC) && (ap->promisc)) {
- cmd.evt = C_SET_PROMISC_MODE;
- cmd.code = C_C_PROMISC_DISABLE;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- ap->promisc = 0;
- }
-
- /*
- * For the time being multicast relies on the upper layers
- * filtering it properly. The Firmware does not allow one to
- * set the entire multicast list at a time and keeping track of
- * it here is going to be messy.
- */
- if (!netdev_mc_empty(dev) && !ap->mcast_all) {
- cmd.evt = C_SET_MULTICAST_MODE;
- cmd.code = C_C_MCAST_ENABLE;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- } else if (!ap->mcast_all) {
- cmd.evt = C_SET_MULTICAST_MODE;
- cmd.code = C_C_MCAST_DISABLE;
- cmd.idx = 0;
- ace_issue_cmd(regs, &cmd);
- }
-}
-
-
-static struct net_device_stats *ace_get_stats(struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_mac_stats __iomem *mac_stats =
- (struct ace_mac_stats __iomem *)ap->regs->Stats;
-
- dev->stats.rx_missed_errors = readl(&mac_stats->drop_space);
- dev->stats.multicast = readl(&mac_stats->kept_mc);
- dev->stats.collisions = readl(&mac_stats->coll);
-
- return &dev->stats;
-}
-
-
-static void __devinit ace_copy(struct ace_regs __iomem *regs, const __be32 *src,
- u32 dest, int size)
-{
- void __iomem *tdest;
- short tsize, i;
-
- if (size <= 0)
- return;
-
- while (size > 0) {
- tsize = min_t(u32, ((~dest & (ACE_WINDOW_SIZE - 1)) + 1),
- min_t(u32, size, ACE_WINDOW_SIZE));
- tdest = (void __iomem *) &regs->Window +
- (dest & (ACE_WINDOW_SIZE - 1));
- writel(dest & ~(ACE_WINDOW_SIZE - 1), &regs->WinBase);
- for (i = 0; i < (tsize / 4); i++) {
- /* Firmware is big-endian */
- writel(be32_to_cpup(src), tdest);
- src++;
- tdest += 4;
- dest += 4;
- size -= 4;
- }
- }
-}
-
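The window arithmetic in ace_copy() and ace_clear() above is terse: the first
min_t() term is simply the number of bytes left before the current 2 KiB
shared-memory window wraps and WinBase has to be moved. A minimal standalone
sketch of that calculation (illustrative only; the helper name is made up):

	#define ACE_WINDOW_SIZE 0x800

	static inline u32 bytes_left_in_window(u32 dest)
	{
		/* e.g. dest = 0x4100: dest & 0x7ff == 0x100, so 0x700 bytes
		 * remain in the current window before the copy loop has to
		 * rebase WinBase to the next 2 KiB window.
		 */
		return (~dest & (ACE_WINDOW_SIZE - 1)) + 1;
	}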
-
-static void __devinit ace_clear(struct ace_regs __iomem *regs, u32 dest, int size)
-{
- void __iomem *tdest;
- short tsize = 0, i;
-
- if (size <= 0)
- return;
-
- while (size > 0) {
- tsize = min_t(u32, ((~dest & (ACE_WINDOW_SIZE - 1)) + 1),
- min_t(u32, size, ACE_WINDOW_SIZE));
- tdest = (void __iomem *) &regs->Window +
- (dest & (ACE_WINDOW_SIZE - 1));
- writel(dest & ~(ACE_WINDOW_SIZE - 1), &regs->WinBase);
-
- for (i = 0; i < (tsize / 4); i++) {
- writel(0, tdest + i*4);
- }
-
- dest += tsize;
- size -= tsize;
- }
-}
-
-
-/*
- * Download the firmware into the SRAM on the NIC
- *
- * This operation requires the NIC to be halted and is performed with
- * interrupts disabled and with the spinlock held.
- */
-static int __devinit ace_load_firmware(struct net_device *dev)
-{
- const struct firmware *fw;
- const char *fw_name = "acenic/tg2.bin";
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- const __be32 *fw_data;
- u32 load_addr;
- int ret;
-
- if (!(readl(&regs->CpuCtrl) & CPU_HALTED)) {
- printk(KERN_ERR "%s: trying to download firmware while the "
- "CPU is running!\n", ap->name);
- return -EFAULT;
- }
-
- if (ACE_IS_TIGON_I(ap))
- fw_name = "acenic/tg1.bin";
-
- ret = request_firmware(&fw, fw_name, &ap->pdev->dev);
- if (ret) {
- printk(KERN_ERR "%s: Failed to load firmware \"%s\"\n",
- ap->name, fw_name);
- return ret;
- }
-
- fw_data = (void *)fw->data;
-
- /* Firmware blob starts with version numbers, followed by
- load and start address. Remainder is the blob to be loaded
- contiguously from load address. We don't bother to represent
- the BSS/SBSS sections any more, since we were clearing the
- whole thing anyway. */
- ap->firmware_major = fw->data[0];
- ap->firmware_minor = fw->data[1];
- ap->firmware_fix = fw->data[2];
-
- ap->firmware_start = be32_to_cpu(fw_data[1]);
- if (ap->firmware_start < 0x4000 || ap->firmware_start >= 0x80000) {
- printk(KERN_ERR "%s: bogus load address %08x in \"%s\"\n",
- ap->name, ap->firmware_start, fw_name);
- ret = -EINVAL;
- goto out;
- }
-
- load_addr = be32_to_cpu(fw_data[2]);
- if (load_addr < 0x4000 || load_addr >= 0x80000) {
- printk(KERN_ERR "%s: bogus load address %08x in \"%s\"\n",
- ap->name, load_addr, fw_name);
- ret = -EINVAL;
- goto out;
- }
-
- /*
- * Do not try to clear more than 512KiB or we end up seeing
- * funny things on NICs with only 512KiB SRAM
- */
- ace_clear(regs, 0x2000, 0x80000-0x2000);
- ace_copy(regs, &fw_data[3], load_addr, fw->size-12);
- out:
- release_firmware(fw);
- return ret;
-}
-
-
-/*
- * The eeprom on the AceNIC is an Atmel i2c EEPROM.
- *
- * Accessing the EEPROM is `interesting' to say the least - don't read
- * this code right after dinner.
- *
- * This is all about black magic and bit-banging the device .... I
- * wonder in what hospital they have put the guy who designed the i2c
- * specs.
- *
- * Oh yes, this is only the beginning!
- *
- * Thanks to Stevarino Webinski for helping track down the bugs in the
- * i2c readout code by beta testing all my hacks.
- */
-static void __devinit eeprom_start(struct ace_regs __iomem *regs)
-{
- u32 local;
-
- readl(&regs->LocalCtrl);
- udelay(ACE_SHORT_DELAY);
- local = readl(&regs->LocalCtrl);
- local |= EEPROM_DATA_OUT | EEPROM_WRITE_ENABLE;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- local |= EEPROM_CLK_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- local &= ~EEPROM_DATA_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- local &= ~EEPROM_CLK_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
-}
-
-
-static void __devinit eeprom_prep(struct ace_regs __iomem *regs, u8 magic)
-{
- short i;
- u32 local;
-
- udelay(ACE_SHORT_DELAY);
- local = readl(&regs->LocalCtrl);
- local &= ~EEPROM_DATA_OUT;
- local |= EEPROM_WRITE_ENABLE;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
-
- for (i = 0; i < 8; i++, magic <<= 1) {
- udelay(ACE_SHORT_DELAY);
- if (magic & 0x80)
- local |= EEPROM_DATA_OUT;
- else
- local &= ~EEPROM_DATA_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
-
- udelay(ACE_SHORT_DELAY);
- local |= EEPROM_CLK_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- local &= ~(EEPROM_CLK_OUT | EEPROM_DATA_OUT);
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- }
-}
-
-
-static int __devinit eeprom_check_ack(struct ace_regs __iomem *regs)
-{
- int state;
- u32 local;
-
- local = readl(&regs->LocalCtrl);
- local &= ~EEPROM_WRITE_ENABLE;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_LONG_DELAY);
- local |= EEPROM_CLK_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- /* sample data in middle of high clk */
- state = (readl(&regs->LocalCtrl) & EEPROM_DATA_IN) != 0;
- udelay(ACE_SHORT_DELAY);
- mb();
- writel(readl(&regs->LocalCtrl) & ~EEPROM_CLK_OUT, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
-
- return state;
-}
-
-
-static void __devinit eeprom_stop(struct ace_regs __iomem *regs)
-{
- u32 local;
-
- udelay(ACE_SHORT_DELAY);
- local = readl(&regs->LocalCtrl);
- local |= EEPROM_WRITE_ENABLE;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- local &= ~EEPROM_DATA_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- local |= EEPROM_CLK_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- local |= EEPROM_DATA_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_LONG_DELAY);
- local &= ~EEPROM_CLK_OUT;
- writel(local, &regs->LocalCtrl);
- mb();
-}
-
-
-/*
- * Read a whole byte from the EEPROM.
- */
-static int __devinit read_eeprom_byte(struct net_device *dev,
- unsigned long offset)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
- unsigned long flags;
- u32 local;
- int result = 0;
- short i;
-
- /*
- * Don't take interrupts on this CPU while bit banging
- * the %#%#@$ I2C device
- */
- local_irq_save(flags);
-
- eeprom_start(regs);
-
- eeprom_prep(regs, EEPROM_WRITE_SELECT);
- if (eeprom_check_ack(regs)) {
- local_irq_restore(flags);
- printk(KERN_ERR "%s: Unable to sync eeprom\n", ap->name);
- result = -EIO;
- goto eeprom_read_error;
- }
-
- eeprom_prep(regs, (offset >> 8) & 0xff);
- if (eeprom_check_ack(regs)) {
- local_irq_restore(flags);
- printk(KERN_ERR "%s: Unable to set address byte 0\n",
- ap->name);
- result = -EIO;
- goto eeprom_read_error;
- }
-
- eeprom_prep(regs, offset & 0xff);
- if (eeprom_check_ack(regs)) {
- local_irq_restore(flags);
- printk(KERN_ERR "%s: Unable to set address byte 1\n",
- ap->name);
- result = -EIO;
- goto eeprom_read_error;
- }
-
- eeprom_start(regs);
- eeprom_prep(regs, EEPROM_READ_SELECT);
- if (eeprom_check_ack(regs)) {
- local_irq_restore(flags);
- printk(KERN_ERR "%s: Unable to set READ_SELECT\n",
- ap->name);
- result = -EIO;
- goto eeprom_read_error;
- }
-
- for (i = 0; i < 8; i++) {
- local = readl(&regs->LocalCtrl);
- local &= ~EEPROM_WRITE_ENABLE;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- udelay(ACE_LONG_DELAY);
- mb();
- local |= EEPROM_CLK_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- /* sample data mid high clk */
- result = (result << 1) |
- ((readl(&regs->LocalCtrl) & EEPROM_DATA_IN) != 0);
- udelay(ACE_SHORT_DELAY);
- mb();
- local = readl(&regs->LocalCtrl);
- local &= ~EEPROM_CLK_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- udelay(ACE_SHORT_DELAY);
- mb();
- if (i == 7) {
- local |= EEPROM_WRITE_ENABLE;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- }
- }
-
- local |= EEPROM_DATA_OUT;
- writel(local, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- writel(readl(&regs->LocalCtrl) | EEPROM_CLK_OUT, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- udelay(ACE_LONG_DELAY);
- writel(readl(&regs->LocalCtrl) & ~EEPROM_CLK_OUT, &regs->LocalCtrl);
- readl(&regs->LocalCtrl);
- mb();
- udelay(ACE_SHORT_DELAY);
- eeprom_stop(regs);
-
- local_irq_restore(flags);
- out:
- return result;
-
- eeprom_read_error:
- printk(KERN_ERR "%s: Unable to read eeprom byte 0x%02lx\n",
- ap->name, offset);
- goto out;
-}
+++ /dev/null
-#ifndef _ACENIC_H_
-#define _ACENIC_H_
-#include <linux/interrupt.h>
-
-
-/*
- * Generate a TX index update each time the TX ring is closed.
- * Normally this is not useful, because it results in more dma (and more
- * irqs without TX_COAL_INTS_ONLY).
- */
-#define USE_TX_COAL_NOW 0
-
-/*
- * Addressing:
- *
- * The Tigon uses 64-bit host addresses, regardless of their actual
- * length, and it expects a big-endian format. For 32 bit systems the
- * upper 32 bits of the address are simply ignored (zero); however, for
- * little endian 64 bit systems (Alpha) this looks strange with the
- * two parts of the address word being swapped.
- *
- * The addresses are split in two 32 bit words for all architectures
- * as some of them are in PCI shared memory and it is necessary to use
- * readl/writel to access them.
- *
- * The addressing code is derived from Pete Wyckoff's work, but
- * modified to deal properly with readl/writel usage.
- */
-
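As a minimal sketch of what this means in practice (illustrative only; the
helper name is made up, and it simply mirrors the set_aceaddr() inline further
down in this header, using InfoPtrHi/InfoPtrLo as an example target):

	static inline void example_write_tigon_addr(struct ace_regs __iomem *regs,
						    dma_addr_t addr)
	{
		u64 a = (u64) addr;

		/* High word first, then low word; on 32 bit hosts the
		 * high word is simply zero.
		 */
		writel(upper_32_bits(a), &regs->InfoPtrHi);
		writel(lower_32_bits(a), &regs->InfoPtrLo);
	}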
-struct ace_regs {
- u32 pad0[16]; /* PCI control registers */
-
- u32 HostCtrl; /* 0x40 */
- u32 LocalCtrl;
-
- u32 pad1[2];
-
- u32 MiscCfg; /* 0x50 */
-
- u32 pad2[2];
-
- u32 PciState;
-
- u32 pad3[2]; /* 0x60 */
-
- u32 WinBase;
- u32 WinData;
-
- u32 pad4[12]; /* 0x70 */
-
- u32 DmaWriteState; /* 0xa0 */
- u32 pad5[3];
- u32 DmaReadState; /* 0xb0 */
-
- u32 pad6[26];
-
- u32 AssistState;
-
- u32 pad7[8]; /* 0x120 */
-
- u32 CpuCtrl; /* 0x140 */
- u32 Pc;
-
- u32 pad8[3];
-
- u32 SramAddr; /* 0x154 */
- u32 SramData;
-
- u32 pad9[49];
-
- u32 MacRxState; /* 0x220 */
-
- u32 pad10[7];
-
- u32 CpuBCtrl; /* 0x240 */
- u32 PcB;
-
- u32 pad11[3];
-
- u32 SramBAddr; /* 0x254 */
- u32 SramBData;
-
- u32 pad12[105];
-
- u32 pad13[32]; /* 0x400 */
- u32 Stats[32];
-
- u32 Mb0Hi; /* 0x500 */
- u32 Mb0Lo;
- u32 Mb1Hi;
- u32 CmdPrd;
- u32 Mb2Hi;
- u32 TxPrd;
- u32 Mb3Hi;
- u32 RxStdPrd;
- u32 Mb4Hi;
- u32 RxJumboPrd;
- u32 Mb5Hi;
- u32 RxMiniPrd;
- u32 Mb6Hi;
- u32 Mb6Lo;
- u32 Mb7Hi;
- u32 Mb7Lo;
- u32 Mb8Hi;
- u32 Mb8Lo;
- u32 Mb9Hi;
- u32 Mb9Lo;
- u32 MbAHi;
- u32 MbALo;
- u32 MbBHi;
- u32 MbBLo;
- u32 MbCHi;
- u32 MbCLo;
- u32 MbDHi;
- u32 MbDLo;
- u32 MbEHi;
- u32 MbELo;
- u32 MbFHi;
- u32 MbFLo;
-
- u32 pad14[32];
-
- u32 MacAddrHi; /* 0x600 */
- u32 MacAddrLo;
- u32 InfoPtrHi;
- u32 InfoPtrLo;
- u32 MultiCastHi; /* 0x610 */
- u32 MultiCastLo;
- u32 ModeStat;
- u32 DmaReadCfg;
- u32 DmaWriteCfg; /* 0x620 */
- u32 TxBufRat;
- u32 EvtCsm;
- u32 CmdCsm;
- u32 TuneRxCoalTicks;/* 0x630 */
- u32 TuneTxCoalTicks;
- u32 TuneStatTicks;
- u32 TuneMaxTxDesc;
- u32 TuneMaxRxDesc; /* 0x640 */
- u32 TuneTrace;
- u32 TuneLink;
- u32 TuneFastLink;
- u32 TracePtr; /* 0x650 */
- u32 TraceStrt;
- u32 TraceLen;
- u32 IfIdx;
- u32 IfMtu; /* 0x660 */
- u32 MaskInt;
- u32 GigLnkState;
- u32 FastLnkState;
- u32 pad16[4]; /* 0x670 */
- u32 RxRetCsm; /* 0x680 */
-
- u32 pad17[31];
-
- u32 CmdRng[64]; /* 0x700 */
- u32 Window[0x200];
-};
-
-
-typedef struct {
- u32 addrhi;
- u32 addrlo;
-} aceaddr;
-
-
-#define ACE_WINDOW_SIZE 0x800
-
-#define ACE_JUMBO_MTU 9000
-#define ACE_STD_MTU 1500
-
-#define ACE_TRACE_SIZE 0x8000
-
-/*
- * Host control register bits.
- */
-
-#define IN_INT 0x01
-#define CLR_INT 0x02
-#define HW_RESET 0x08
-#define BYTE_SWAP 0x10
-#define WORD_SWAP 0x20
-#define MASK_INTS 0x40
-
-/*
- * Local control register bits.
- */
-
-#define EEPROM_DATA_IN 0x800000
-#define EEPROM_DATA_OUT 0x400000
-#define EEPROM_WRITE_ENABLE 0x200000
-#define EEPROM_CLK_OUT 0x100000
-
-#define EEPROM_BASE 0xa0000000
-
-#define EEPROM_WRITE_SELECT 0xa0
-#define EEPROM_READ_SELECT 0xa1
-
-#define SRAM_BANK_512K 0x200
-
-
-/*
- * udelay() values for when clocking the eeprom
- */
-#define ACE_SHORT_DELAY 2
-#define ACE_LONG_DELAY 4
-
-
-/*
- * Misc Config bits
- */
-
-#define SYNC_SRAM_TIMING 0x100000
-
-
-/*
- * CPU state bits.
- */
-
-#define CPU_RESET 0x01
-#define CPU_TRACE 0x02
-#define CPU_PROM_FAILED 0x10
-#define CPU_HALT 0x00010000
-#define CPU_HALTED 0xffff0000
-
-
-/*
- * PCI State bits.
- */
-
-#define DMA_READ_MAX_4 0x04
-#define DMA_READ_MAX_16 0x08
-#define DMA_READ_MAX_32 0x0c
-#define DMA_READ_MAX_64 0x10
-#define DMA_READ_MAX_128 0x14
-#define DMA_READ_MAX_256 0x18
-#define DMA_READ_MAX_1K 0x1c
-#define DMA_WRITE_MAX_4 0x20
-#define DMA_WRITE_MAX_16 0x40
-#define DMA_WRITE_MAX_32 0x60
-#define DMA_WRITE_MAX_64 0x80
-#define DMA_WRITE_MAX_128 0xa0
-#define DMA_WRITE_MAX_256 0xc0
-#define DMA_WRITE_MAX_1K 0xe0
-#define DMA_READ_WRITE_MASK 0xfc
-#define MEM_READ_MULTIPLE 0x00020000
-#define PCI_66MHZ 0x00080000
-#define PCI_32BIT 0x00100000
-#define DMA_WRITE_ALL_ALIGN 0x00800000
-#define READ_CMD_MEM 0x06000000
-#define WRITE_CMD_MEM 0x70000000
-
-
-/*
- * Mode status
- */
-
-#define ACE_BYTE_SWAP_BD 0x02
-#define ACE_WORD_SWAP_BD 0x04 /* not actually used */
-#define ACE_WARN 0x08
-#define ACE_BYTE_SWAP_DMA 0x10
-#define ACE_NO_JUMBO_FRAG 0x200
-#define ACE_FATAL 0x40000000
-
-
-/*
- * DMA config
- */
-
-#define DMA_THRESH_1W 0x10
-#define DMA_THRESH_2W 0x20
-#define DMA_THRESH_4W 0x40
-#define DMA_THRESH_8W 0x80
-#define DMA_THRESH_16W 0x100
-#define DMA_THRESH_32W 0x0 /* not described in doc, but exists. */
-
-
-/*
- * Tuning parameters
- */
-
-#define TICKS_PER_SEC 1000000
-
-
-/*
- * Link bits
- */
-
-#define LNK_PREF 0x00008000
-#define LNK_10MB 0x00010000
-#define LNK_100MB 0x00020000
-#define LNK_1000MB 0x00040000
-#define LNK_FULL_DUPLEX 0x00080000
-#define LNK_HALF_DUPLEX 0x00100000
-#define LNK_TX_FLOW_CTL_Y 0x00200000
-#define LNK_NEG_ADVANCED 0x00400000
-#define LNK_RX_FLOW_CTL_Y 0x00800000
-#define LNK_NIC 0x01000000
-#define LNK_JAM 0x02000000
-#define LNK_JUMBO 0x04000000
-#define LNK_ALTEON 0x08000000
-#define LNK_NEG_FCTL 0x10000000
-#define LNK_NEGOTIATE 0x20000000
-#define LNK_ENABLE 0x40000000
-#define LNK_UP 0x80000000
-
-
-/*
- * Event definitions
- */
-
-#define EVT_RING_ENTRIES 256
-#define EVT_RING_SIZE (EVT_RING_ENTRIES * sizeof(struct event))
-
-struct event {
-#ifdef __LITTLE_ENDIAN_BITFIELD
- u32 idx:12;
- u32 code:12;
- u32 evt:8;
-#else
- u32 evt:8;
- u32 code:12;
- u32 idx:12;
-#endif
- u32 pad;
-};
-
-
-/*
- * Events
- */
-
-#define E_FW_RUNNING 0x01
-#define E_STATS_UPDATED 0x04
-
-#define E_STATS_UPDATE 0x04
-
-#define E_LNK_STATE 0x06
-#define E_C_LINK_UP 0x01
-#define E_C_LINK_DOWN 0x02
-#define E_C_LINK_10_100 0x03
-
-#define E_ERROR 0x07
-#define E_C_ERR_INVAL_CMD 0x01
-#define E_C_ERR_UNIMP_CMD 0x02
-#define E_C_ERR_BAD_CFG 0x03
-
-#define E_MCAST_LIST 0x08
-#define E_C_MCAST_ADDR_ADD 0x01
-#define E_C_MCAST_ADDR_DEL 0x02
-
-#define E_RESET_JUMBO_RNG 0x09
-
-
-/*
- * Commands
- */
-
-#define CMD_RING_ENTRIES 64
-
-struct cmd {
-#ifdef __LITTLE_ENDIAN_BITFIELD
- u32 idx:12;
- u32 code:12;
- u32 evt:8;
-#else
- u32 evt:8;
- u32 code:12;
- u32 idx:12;
-#endif
-};
-
-
-#define C_HOST_STATE 0x01
-#define C_C_STACK_UP 0x01
-#define C_C_STACK_DOWN 0x02
-
-#define C_FDR_FILTERING 0x02
-#define C_C_FDR_FILT_ENABLE 0x01
-#define C_C_FDR_FILT_DISABLE 0x02
-
-#define C_SET_RX_PRD_IDX 0x03
-#define C_UPDATE_STATS 0x04
-#define C_RESET_JUMBO_RNG 0x05
-#define C_ADD_MULTICAST_ADDR 0x08
-#define C_DEL_MULTICAST_ADDR 0x09
-
-#define C_SET_PROMISC_MODE 0x0a
-#define C_C_PROMISC_ENABLE 0x01
-#define C_C_PROMISC_DISABLE 0x02
-
-#define C_LNK_NEGOTIATION 0x0b
-#define C_C_NEGOTIATE_BOTH 0x00
-#define C_C_NEGOTIATE_GIG 0x01
-#define C_C_NEGOTIATE_10_100 0x02
-
-#define C_SET_MAC_ADDR 0x0c
-#define C_CLEAR_PROFILE 0x0d
-
-#define C_SET_MULTICAST_MODE 0x0e
-#define C_C_MCAST_ENABLE 0x01
-#define C_C_MCAST_DISABLE 0x02
-
-#define C_CLEAR_STATS 0x0f
-#define C_SET_RX_JUMBO_PRD_IDX 0x10
-#define C_REFRESH_STATS 0x11
-
-
-/*
- * Descriptor flags
- */
-#define BD_FLG_TCP_UDP_SUM 0x01
-#define BD_FLG_IP_SUM 0x02
-#define BD_FLG_END 0x04
-#define BD_FLG_MORE 0x08
-#define BD_FLG_JUMBO 0x10
-#define BD_FLG_UCAST 0x20
-#define BD_FLG_MCAST 0x40
-#define BD_FLG_BCAST 0x60
-#define BD_FLG_TYP_MASK 0x60
-#define BD_FLG_IP_FRAG 0x80
-#define BD_FLG_IP_FRAG_END 0x100
-#define BD_FLG_VLAN_TAG 0x200
-#define BD_FLG_FRAME_ERROR 0x400
-#define BD_FLG_COAL_NOW 0x800
-#define BD_FLG_MINI 0x1000
-
-
-/*
- * Ring Control block flags
- */
-#define RCB_FLG_TCP_UDP_SUM 0x01
-#define RCB_FLG_IP_SUM 0x02
-#define RCB_FLG_NO_PSEUDO_HDR 0x08
-#define RCB_FLG_VLAN_ASSIST 0x10
-#define RCB_FLG_COAL_INT_ONLY 0x20
-#define RCB_FLG_TX_HOST_RING 0x40
-#define RCB_FLG_IEEE_SNAP_SUM 0x80
-#define RCB_FLG_EXT_RX_BD 0x100
-#define RCB_FLG_RNG_DISABLE 0x200
-
-
-/*
- * TX ring - maximum TX ring entries for Tigon I's is 128
- */
-#define MAX_TX_RING_ENTRIES 256
-#define TIGON_I_TX_RING_ENTRIES 128
-#define TX_RING_SIZE (MAX_TX_RING_ENTRIES * sizeof(struct tx_desc))
-#define TX_RING_BASE 0x3800
-
-struct tx_desc{
- aceaddr addr;
- u32 flagsize;
-#if 0
-/*
- * This is in PCI shared mem and must be accessed with readl/writel
- * real layout is:
- */
-#if __LITTLE_ENDIAN
- u16 flags;
- u16 size;
- u16 vlan;
- u16 reserved;
-#else
- u16 size;
- u16 flags;
- u16 reserved;
- u16 vlan;
-#endif
-#endif
- u32 vlanres;
-};
-
-
-#define RX_STD_RING_ENTRIES 512
-#define RX_STD_RING_SIZE (RX_STD_RING_ENTRIES * sizeof(struct rx_desc))
-
-#define RX_JUMBO_RING_ENTRIES 256
-#define RX_JUMBO_RING_SIZE (RX_JUMBO_RING_ENTRIES *sizeof(struct rx_desc))
-
-#define RX_MINI_RING_ENTRIES 1024
-#define RX_MINI_RING_SIZE (RX_MINI_RING_ENTRIES *sizeof(struct rx_desc))
-
-#define RX_RETURN_RING_ENTRIES 2048
-#define RX_RETURN_RING_SIZE (RX_RETURN_RING_ENTRIES * \
- sizeof(struct rx_desc))
-
-struct rx_desc{
- aceaddr addr;
-#ifdef __LITTLE_ENDIAN
- u16 size;
- u16 idx;
-#else
- u16 idx;
- u16 size;
-#endif
-#ifdef __LITTLE_ENDIAN
- u16 flags;
- u16 type;
-#else
- u16 type;
- u16 flags;
-#endif
-#ifdef __LITTLE_ENDIAN
- u16 tcp_udp_csum;
- u16 ip_csum;
-#else
- u16 ip_csum;
- u16 tcp_udp_csum;
-#endif
-#ifdef __LITTLE_ENDIAN
- u16 vlan;
- u16 err_flags;
-#else
- u16 err_flags;
- u16 vlan;
-#endif
- u32 reserved;
- u32 opaque;
-};
-
-
-/*
- * This struct is shared with the NIC firmware.
- */
-struct ring_ctrl {
- aceaddr rngptr;
-#ifdef __LITTLE_ENDIAN
- u16 flags;
- u16 max_len;
-#else
- u16 max_len;
- u16 flags;
-#endif
- u32 pad;
-};
-
-
-struct ace_mac_stats {
- u32 excess_colls;
- u32 coll_1;
- u32 coll_2;
- u32 coll_3;
- u32 coll_4;
- u32 coll_5;
- u32 coll_6;
- u32 coll_7;
- u32 coll_8;
- u32 coll_9;
- u32 coll_10;
- u32 coll_11;
- u32 coll_12;
- u32 coll_13;
- u32 coll_14;
- u32 coll_15;
- u32 late_coll;
- u32 defers;
- u32 crc_err;
- u32 underrun;
- u32 crs_err;
- u32 pad[3];
- u32 drop_ula;
- u32 drop_mc;
- u32 drop_fc;
- u32 drop_space;
- u32 coll;
- u32 kept_bc;
- u32 kept_mc;
- u32 kept_uc;
-};
-
-
-struct ace_info {
- union {
- u32 stats[256];
- } s;
- struct ring_ctrl evt_ctrl;
- struct ring_ctrl cmd_ctrl;
- struct ring_ctrl tx_ctrl;
- struct ring_ctrl rx_std_ctrl;
- struct ring_ctrl rx_jumbo_ctrl;
- struct ring_ctrl rx_mini_ctrl;
- struct ring_ctrl rx_return_ctrl;
- aceaddr evt_prd_ptr;
- aceaddr rx_ret_prd_ptr;
- aceaddr tx_csm_ptr;
- aceaddr stats2_ptr;
-};
-
-
-struct ring_info {
- struct sk_buff *skb;
- DEFINE_DMA_UNMAP_ADDR(mapping);
-};
-
-
-/*
- * Funny... As soon as we add maplen on alpha, it gets much slower.
- * Hmm... is it because the struct no longer fits in one cache line?
- * So, split tx_ring_info.
- */
-struct tx_ring_info {
- struct sk_buff *skb;
- DEFINE_DMA_UNMAP_ADDR(mapping);
- DEFINE_DMA_UNMAP_LEN(maplen);
-};
-
-
-/*
- * struct ace_skb holds the rings of skb's. This is an awful lot of
- * pointers, but I don't see any smarter way to do this in an
- * efficient manner ;-(
- */
-struct ace_skb
-{
- struct tx_ring_info tx_skbuff[MAX_TX_RING_ENTRIES];
- struct ring_info rx_std_skbuff[RX_STD_RING_ENTRIES];
- struct ring_info rx_mini_skbuff[RX_MINI_RING_ENTRIES];
- struct ring_info rx_jumbo_skbuff[RX_JUMBO_RING_ENTRIES];
-};
-
-
-/*
- * Struct private for the AceNIC.
- *
- * Elements are grouped so variables used by the tx handling go
- * together, and will go into the same cache lines etc. in order to
- * avoid cache line contention between the rx and tx handling on SMP.
- *
- * Frequently accessed variables are put at the beginning of the
- * struct to help the compiler generate better/shorter code.
- */
-struct ace_private
-{
- struct ace_info *info;
- struct ace_regs __iomem *regs; /* register base */
- struct ace_skb *skb;
- dma_addr_t info_dma; /* 32/64 bit */
-
- int version, link;
- int promisc, mcast_all;
-
- /*
- * TX elements
- */
- struct tx_desc *tx_ring;
- u32 tx_prd;
- volatile u32 tx_ret_csm;
- int tx_ring_entries;
-
- /*
- * RX elements
- */
- unsigned long std_refill_busy
- __attribute__ ((aligned (SMP_CACHE_BYTES)));
- unsigned long mini_refill_busy, jumbo_refill_busy;
- atomic_t cur_rx_bufs;
- atomic_t cur_mini_bufs;
- atomic_t cur_jumbo_bufs;
- u32 rx_std_skbprd, rx_mini_skbprd, rx_jumbo_skbprd;
- u32 cur_rx;
-
- struct rx_desc *rx_std_ring;
- struct rx_desc *rx_jumbo_ring;
- struct rx_desc *rx_mini_ring;
- struct rx_desc *rx_return_ring;
-
- int tasklet_pending, jumbo;
- struct tasklet_struct ace_tasklet;
-
- struct event *evt_ring;
-
- volatile u32 *evt_prd, *rx_ret_prd, *tx_csm;
-
- dma_addr_t tx_ring_dma; /* 32/64 bit */
- dma_addr_t rx_ring_base_dma;
- dma_addr_t evt_ring_dma;
- dma_addr_t evt_prd_dma, rx_ret_prd_dma, tx_csm_dma;
-
- unsigned char *trace_buf;
- struct pci_dev *pdev;
- struct net_device *next;
- volatile int fw_running;
- int board_idx;
- u16 pci_command;
- u8 pci_latency;
- const char *name;
-#ifdef INDEX_DEBUG
- spinlock_t debug_lock
- __attribute__ ((aligned (SMP_CACHE_BYTES)));
- u32 last_tx, last_std_rx, last_mini_rx;
-#endif
- int pci_using_dac;
- u8 firmware_major;
- u8 firmware_minor;
- u8 firmware_fix;
- u32 firmware_start;
-};
-
-
-#define TX_RESERVED MAX_SKB_FRAGS
-
-static inline int tx_space (struct ace_private *ap, u32 csm, u32 prd)
-{
- return (csm - prd - 1) & (ACE_TX_RING_ENTRIES(ap) - 1);
-}
-
-#define tx_free(ap) tx_space(ap, (ap)->tx_ret_csm, (ap)->tx_prd)
-#define tx_ring_full(ap, csm, prd) (tx_space(ap, csm, prd) <= TX_RESERVED)
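A quick worked example of the ring arithmetic above (illustrative only; it
assumes the ring size is a power of two, which ACE_TX_RING_ENTRIES()
guarantees):

	static inline int tx_space_example(void)
	{
		u32 entries = 256, csm = 10, prd = 250;

		/* (10 - 250 - 1) & 255 == 15 free slots. One slot is always
		 * kept unused so a completely full ring can be told apart
		 * from an empty one; tx_ring_full() triggers once the free
		 * count drops to TX_RESERVED or below.
		 */
		return (csm - prd - 1) & (entries - 1);
	}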
-
-static inline void set_aceaddr(aceaddr *aa, dma_addr_t addr)
-{
- u64 baddr = (u64) addr;
- aa->addrlo = baddr & 0xffffffff;
- aa->addrhi = baddr >> 32;
- wmb();
-}
-
-
-static inline void ace_set_txprd(struct ace_regs __iomem *regs,
- struct ace_private *ap, u32 value)
-{
-#ifdef INDEX_DEBUG
- unsigned long flags;
- spin_lock_irqsave(&ap->debug_lock, flags);
- writel(value, &regs->TxPrd);
- if (value == ap->last_tx)
- printk(KERN_ERR "AceNIC RACE ALERT! writing identical value "
- "to tx producer (%i)\n", value);
- ap->last_tx = value;
- spin_unlock_irqrestore(&ap->debug_lock, flags);
-#else
- writel(value, &regs->TxPrd);
-#endif
- wmb();
-}
-
-
-static inline void ace_mask_irq(struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
-
- if (ACE_IS_TIGON_I(ap))
- writel(1, &regs->MaskInt);
- else
- writel(readl(&regs->HostCtrl) | MASK_INTS, &regs->HostCtrl);
-
- ace_sync_irq(dev->irq);
-}
-
-
-static inline void ace_unmask_irq(struct net_device *dev)
-{
- struct ace_private *ap = netdev_priv(dev);
- struct ace_regs __iomem *regs = ap->regs;
-
- if (ACE_IS_TIGON_I(ap))
- writel(0, &regs->MaskInt);
- else
- writel(readl(&regs->HostCtrl) & ~MASK_INTS, &regs->HostCtrl);
-}
-
-
-/*
- * Prototypes
- */
-static int ace_init(struct net_device *dev);
-static void ace_load_std_rx_ring(struct net_device *dev, int nr_bufs);
-static void ace_load_mini_rx_ring(struct net_device *dev, int nr_bufs);
-static void ace_load_jumbo_rx_ring(struct net_device *dev, int nr_bufs);
-static irqreturn_t ace_interrupt(int irq, void *dev_id);
-static int ace_load_firmware(struct net_device *dev);
-static int ace_open(struct net_device *dev);
-static netdev_tx_t ace_start_xmit(struct sk_buff *skb,
- struct net_device *dev);
-static int ace_close(struct net_device *dev);
-static void ace_tasklet(unsigned long dev);
-static void ace_dump_trace(struct ace_private *ap);
-static void ace_set_multicast_list(struct net_device *dev);
-static int ace_change_mtu(struct net_device *dev, int new_mtu);
-static int ace_set_mac_addr(struct net_device *dev, void *p);
-static void ace_set_rxtx_parms(struct net_device *dev, int jumbo);
-static int ace_allocate_descriptors(struct net_device *dev);
-static void ace_free_descriptors(struct net_device *dev);
-static void ace_init_cleanup(struct net_device *dev);
-static struct net_device_stats *ace_get_stats(struct net_device *dev);
-static int read_eeprom_byte(struct net_device *dev, unsigned long offset);
-
-#endif /* _ACENIC_H_ */
--- /dev/null
+/* 3c501.c: A 3Com 3c501 Ethernet driver for Linux. */
+/*
+ Written 1992,1993,1994 Donald Becker
+
+ Copyright 1993 United States Government as represented by the
+ Director, National Security Agency. This software may be used and
+ distributed according to the terms of the GNU General Public License,
+ incorporated herein by reference.
+
+ This is a device driver for the 3Com Etherlink 3c501.
+ Do not purchase this card, even as a joke. Its performance is horrible,
+ and it breaks in many ways.
+
+ The original author may be reached as becker@scyld.com, or C/O
+ Scyld Computing Corporation
+ 410 Severn Ave., Suite 210
+ Annapolis MD 21403
+
+ Fixed (again!) the missing interrupt locking on TX/RX shifting.
+ Alan Cox <alan@lxorguk.ukuu.org.uk>
+
+ Removed calls to init_etherdev since they are no longer needed, and
+ cleaned up modularization just a bit. The driver still allows only
+ the default address for cards when loaded as a module, but that's
+ really less braindead than anyone using a 3c501 board. :)
+ 19950208 (invid@msen.com)
+
+ Added traps for interrupts hitting the window as we clear and TX load
+ the board. Now getting 150K/second FTP with a 3c501 card. Still playing
+ with a TX-TX optimisation to see if we can touch 180-200K/second as seems
+ theoretically maximum.
+ 19950402 Alan Cox <alan@lxorguk.ukuu.org.uk>
+
+ Cleaned up for 2.3.x because we broke SMP now.
+ 20000208 Alan Cox <alan@lxorguk.ukuu.org.uk>
+
+ Check up pass for 2.5. Nothing significant changed
+ 20021009 Alan Cox <alan@lxorguk.ukuu.org.uk>
+
+ Fixed zero fill corner case
+ 20030104 Alan Cox <alan@lxorguk.ukuu.org.uk>
+
+
+ For the avoidance of doubt the "preferred form" of this code is one which
+ is in an open non patent encumbered format. Where cryptographic key signing
+ forms part of the process of creating an executable the information
+ including keys needed to generate an equivalently functional executable
+ are deemed to be part of the source code.
+
+*/
+
+
+/**
+ * DOC: 3c501 Card Notes
+ *
+ * Some notes on this thing if you have to hack it. [Alan]
+ *
+ * Some documentation is available from 3Com. Due to the board's age
+ * standard responses when you ask for this will range from 'be serious'
+ * to 'give it to a museum'. The documentation is incomplete and mostly
+ * of historical interest anyway.
+ *
+ * The basic system is a single buffer which can be used to receive or
+ * transmit a packet. A third command mode exists when you are setting
+ * things up.
+ *
+ * If it's transmitting it's not receiving and vice versa. In fact the
+ * time to get the board back into useful state after an operation is
+ * quite large.
+ *
+ * The driver works by keeping the board in receive mode waiting for a
+ * packet to arrive. When one arrives it is copied out of the buffer
+ * and delivered to the kernel. The card is reloaded and off we go.
+ *
+ * When transmitting lp->txing is set and the card is reset (from
+ * receive mode) [possibly losing a packet just received] to command
+ * mode. A packet is loaded and transmit mode triggered. The interrupt
+ * handler runs different code for transmit interrupts and can handle
+ * returning to receive mode or retransmissions (yes you have to help
+ * out with those too).
+ *
+ * DOC: Problems
+ *
+ * There are a wide variety of undocumented error returns from the card
+ * and you basically have to kick the board and pray if they turn up. Most
+ * only occur under extreme load or if you do something the board doesn't
+ * like (eg touching a register at the wrong time).
+ *
+ * The driver is less efficient than it could be. It switches through
+ * receive mode even if more transmits are queued. If this worries you buy
+ * a real Ethernet card.
+ *
+ * The combination of slow receive restart and no real multicast
+ * filter makes the board unusable with a kernel compiled for IP
+ * multicasting in a real multicast environment. That's down to the board,
+ * but even with no multicast programs running a multicast IP kernel is
+ * in group 224.0.0.1 and you will therefore be listening to all multicasts.
+ * One nv conference running over that Ethernet and you can give up.
+ *
+ */
+
+#define DRV_NAME "3c501"
+#define DRV_VERSION "2002/10/09"
+
+
+static const char version[] =
+ DRV_NAME ".c: " DRV_VERSION " Alan Cox (alan@lxorguk.ukuu.org.uk).\n";
+
+/*
+ * Braindamage remaining:
+ * The 3c501 board.
+ */
+
+#include <linux/module.h>
+
+#include <linux/kernel.h>
+#include <linux/fcntl.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/spinlock.h>
+#include <linux/ethtool.h>
+#include <linux/delay.h>
+#include <linux/bitops.h>
+
+#include <asm/uaccess.h>
+#include <asm/io.h>
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/init.h>
+
+#include "3c501.h"
+
+/*
+ * The boilerplate probe code.
+ */
+
+static int io = 0x280;
+static int irq = 5;
+static int mem_start;
+
+/**
+ * el1_probe: - probe for a 3c501
+ * @dev: The device structure passed in to probe.
+ *
+ * This can be called from two places. The network layer will probe using
+ * a device structure passed in with the probe information completed. For a
+ * modular driver we use #init_module to fill in our own structure and probe
+ * for it.
+ *
+ * Returns 0 on success. ENXIO if asked not to probe and ENODEV if asked to
+ * probe and failing to find anything.
+ */
+
+struct net_device * __init el1_probe(int unit)
+{
+ struct net_device *dev = alloc_etherdev(sizeof(struct net_local));
+ static const unsigned ports[] = { 0x280, 0x300, 0};
+ const unsigned *port;
+ int err = 0;
+
+ if (!dev)
+ return ERR_PTR(-ENOMEM);
+
+ if (unit >= 0) {
+ sprintf(dev->name, "eth%d", unit);
+ netdev_boot_setup_check(dev);
+ io = dev->base_addr;
+ irq = dev->irq;
+ mem_start = dev->mem_start & 7;
+ }
+
+ if (io > 0x1ff) { /* Check a single specified location. */
+ err = el1_probe1(dev, io);
+ } else if (io != 0) {
+ err = -ENXIO; /* Don't probe at all. */
+ } else {
+ for (port = ports; *port && el1_probe1(dev, *port); port++)
+ ;
+ if (!*port)
+ err = -ENODEV;
+ }
+ if (err)
+ goto out;
+ err = register_netdev(dev);
+ if (err)
+ goto out1;
+ return dev;
+out1:
+ release_region(dev->base_addr, EL1_IO_EXTENT);
+out:
+ free_netdev(dev);
+ return ERR_PTR(err);
+}
+
+static const struct net_device_ops el_netdev_ops = {
+ .ndo_open = el_open,
+ .ndo_stop = el1_close,
+ .ndo_start_xmit = el_start_xmit,
+ .ndo_tx_timeout = el_timeout,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
+/**
+ * el1_probe1:
+ * @dev: The device structure to use
+ * @ioaddr: An I/O address to probe at.
+ *
+ * The actual probe. This is iterated over by #el1_probe in order to
+ * check all the applicable device locations.
+ *
+ * Returns 0 for a success, in which case the device is activated,
+ * -EAGAIN if the IRQ line cannot be auto-detected, and -ENODEV if the
+ * board cannot be found.
+ */
+
+static int __init el1_probe1(struct net_device *dev, int ioaddr)
+{
+ struct net_local *lp;
+ const char *mname; /* Vendor name */
+ unsigned char station_addr[6];
+ int autoirq = 0;
+ int i;
+
+ /*
+ * Reserve I/O resource for exclusive use by this driver
+ */
+
+ if (!request_region(ioaddr, EL1_IO_EXTENT, DRV_NAME))
+ return -ENODEV;
+
+ /*
+ * Read the station address PROM data from the special port.
+ */
+
+ for (i = 0; i < 6; i++) {
+ outw(i, ioaddr + EL1_DATAPTR);
+ station_addr[i] = inb(ioaddr + EL1_SAPROM);
+ }
+ /*
+ * Check the first three octets of the S.A. for 3Com's prefix, or
+ * for the Sager NP943 prefix.
+ */
+
+ if (station_addr[0] == 0x02 && station_addr[1] == 0x60 &&
+ station_addr[2] == 0x8c)
+ mname = "3c501";
+ else if (station_addr[0] == 0x00 && station_addr[1] == 0x80 &&
+ station_addr[2] == 0xC8)
+ mname = "NP943";
+ else {
+ release_region(ioaddr, EL1_IO_EXTENT);
+ return -ENODEV;
+ }
+
+ /*
+ * We auto-IRQ by shutting off the interrupt line and letting it
+ * float high.
+ */
+
+ dev->irq = irq;
+
+ if (dev->irq < 2) {
+ unsigned long irq_mask;
+
+ irq_mask = probe_irq_on();
+ inb(RX_STATUS); /* Clear pending interrupts. */
+ inb(TX_STATUS);
+ outb(AX_LOOP + 1, AX_CMD);
+
+ outb(0x00, AX_CMD);
+
+ mdelay(20);
+ autoirq = probe_irq_off(irq_mask);
+
+ if (autoirq == 0) {
+ pr_warning("%s probe at %#x failed to detect IRQ line.\n",
+ mname, ioaddr);
+ release_region(ioaddr, EL1_IO_EXTENT);
+ return -EAGAIN;
+ }
+ }
+
+ outb(AX_RESET+AX_LOOP, AX_CMD); /* Loopback mode. */
+ dev->base_addr = ioaddr;
+ memcpy(dev->dev_addr, station_addr, ETH_ALEN);
+
+ if (mem_start & 0xf)
+ el_debug = mem_start & 0x7;
+ if (autoirq)
+ dev->irq = autoirq;
+
+ pr_info("%s: %s EtherLink at %#lx, using %sIRQ %d.\n",
+ dev->name, mname, dev->base_addr,
+ autoirq ? "auto":"assigned ", dev->irq);
+
+#ifdef CONFIG_IP_MULTICAST
+ pr_warning("WARNING: Use of the 3c501 in a multicast kernel is NOT recommended.\n");
+#endif
+
+ if (el_debug)
+ pr_debug("%s", version);
+
+ lp = netdev_priv(dev);
+ memset(lp, 0, sizeof(struct net_local));
+ spin_lock_init(&lp->lock);
+
+ /*
+ * The EL1-specific entries in the device structure.
+ */
+
+ dev->netdev_ops = &el_netdev_ops;
+ dev->watchdog_timeo = HZ;
+ dev->ethtool_ops = &netdev_ethtool_ops;
+ return 0;
+}
+
+/**
+ * el_open:
+ * @dev: device that is being opened
+ *
+ * When an ifconfig is issued which changes the device flags to include
+ * IFF_UP this function is called. It is only called when the change
+ * occurs, not when the interface remains up. #el1_close will be called
+ * when it goes down.
+ *
+ * Returns 0 for a successful open, or the error from request_irq() if
+ * someone has run off with our interrupt line.
+ */
+
+static int el_open(struct net_device *dev)
+{
+ int retval;
+ int ioaddr = dev->base_addr;
+ struct net_local *lp = netdev_priv(dev);
+ unsigned long flags;
+
+ if (el_debug > 2)
+ pr_debug("%s: Doing el_open()...\n", dev->name);
+
+ retval = request_irq(dev->irq, el_interrupt, 0, dev->name, dev);
+ if (retval)
+ return retval;
+
+ spin_lock_irqsave(&lp->lock, flags);
+ el_reset(dev);
+ spin_unlock_irqrestore(&lp->lock, flags);
+
+ lp->txing = 0; /* Board in RX mode */
+ outb(AX_RX, AX_CMD); /* Aux control, irq and receive enabled */
+ netif_start_queue(dev);
+ return 0;
+}
+
+/**
+ * el_timeout:
+ * @dev: The 3c501 card that has timed out
+ *
+ * Attempt to restart the board. This is basically a mixture of extreme
+ * violence and prayer.
+ *
+ */
+
+static void el_timeout(struct net_device *dev)
+{
+ struct net_local *lp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+
+ if (el_debug)
+ pr_debug("%s: transmit timed out, txsr %#2x axsr=%02x rxsr=%02x.\n",
+ dev->name, inb(TX_STATUS),
+ inb(AX_STATUS), inb(RX_STATUS));
+ dev->stats.tx_errors++;
+ outb(TX_NORM, TX_CMD);
+ outb(RX_NORM, RX_CMD);
+ outb(AX_OFF, AX_CMD); /* Just trigger a false interrupt. */
+ outb(AX_RX, AX_CMD); /* Aux control, irq and receive enabled */
+ lp->txing = 0; /* Ripped back in to RX */
+ netif_wake_queue(dev);
+}
+
+
+/**
+ * el_start_xmit:
+ * @skb: The packet that is queued to be sent
+ * @dev: The 3c501 card we want to throw it down
+ *
+ * Attempt to send a packet to a 3c501 card. There are some interesting
+ * catches here because the 3c501 is an extremely old and therefore
+ * stupid piece of technology.
+ *
+ * If we are handling an interrupt on another CPU we cannot load a packet
+ * as we may still be attempting to retrieve the last RX packet buffer.
+ *
+ * When a transmit times out we dump the card into control mode and just
+ * start again. It happens enough that it isn't worth logging.
+ *
+ * We avoid holding the spin locks when doing the packet load to the board.
+ * The device is very slow, and its DMA mode is even slower. If we held the
+ * lock while loading 1500 bytes onto the controller we would drop a lot of
+ * serial port characters. This requires we do extra locking, but we have
+ * no real choice.
+ */
+
+static netdev_tx_t el_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct net_local *lp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+ unsigned long flags;
+
+ /*
+ * Avoid incoming interrupts between us flipping txing and flipping
+ * mode as the driver assumes txing is a faithful indicator of card
+ * state
+ */
+
+ spin_lock_irqsave(&lp->lock, flags);
+
+ /*
+ * Avoid timer-based retransmission conflicts.
+ */
+
+ netif_stop_queue(dev);
+
+ do {
+ int len = skb->len;
+ int pad = 0;
+ int gp_start;
+ unsigned char *buf = skb->data;
+
+ if (len < ETH_ZLEN)
+ pad = ETH_ZLEN - len;
+
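+		/* The board has a single 2K (0x800 byte) packet buffer; load
+		 * the frame so that it ends exactly at the top of the buffer. */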
+ gp_start = 0x800 - (len + pad);
+
+ lp->tx_pkt_start = gp_start;
+ lp->collisions = 0;
+
+ dev->stats.tx_bytes += skb->len;
+
+ /*
+ * Command mode with status cleared should [in theory]
+ * mean no more interrupts can be pending on the card.
+ */
+
+ outb_p(AX_SYS, AX_CMD);
+ inb_p(RX_STATUS);
+ inb_p(TX_STATUS);
+
+ lp->loading = 1;
+ lp->txing = 1;
+
+ /*
+ * Turn interrupts back on while we spend a pleasant
+ * afternoon loading bytes into the board
+ */
+
+ spin_unlock_irqrestore(&lp->lock, flags);
+
+ /* Set rx packet area to 0. */
+ outw(0x00, RX_BUF_CLR);
+ /* aim - packet will be loaded into buffer start */
+ outw(gp_start, GP_LOW);
+		/* load buffer (as usual, each byte written increments the pointer) */
+ outsb(DATAPORT, buf, len);
+ if (pad) {
+ while (pad--) /* Zero fill buffer tail */
+ outb(0, DATAPORT);
+ }
+ /* the board reuses the same register */
+ outw(gp_start, GP_LOW);
+
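+		/* While the buffer was loaded with interrupts enabled the IRQ
+		 * handler may have seen a stray receive and bumped lp->loading
+		 * to 2; if so, the load is suspect and we go round again. */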
+ if (lp->loading != 2) {
+ /* fire ... Trigger xmit. */
+ outb(AX_XMIT, AX_CMD);
+ lp->loading = 0;
+ if (el_debug > 2)
+ pr_debug(" queued xmit.\n");
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
+ /* A receive upset our load, despite our best efforts */
+ if (el_debug > 2)
+ pr_debug("%s: burped during tx load.\n", dev->name);
+ spin_lock_irqsave(&lp->lock, flags);
+ } while (1);
+}
+
+/**
+ * el_interrupt:
+ * @irq: Interrupt number
+ * @dev_id: The 3c501 that burped
+ *
+ * Handle the ether interface interrupts. The 3c501 needs a lot more
+ * hand holding than most cards. In particular we get a transmit interrupt
+ * with a collision error because the board firmware isn't capable of rewinding
+ * its own transmit buffer pointers. It can however count to 16 for us.
+ *
+ * On the receive side the card is also very dumb. It has no buffering to
+ * speak of. We simply pull the packet out of its PIO buffer (which is slow)
+ * and queue it for the kernel. Then we reset the card for the next packet.
+ *
+ * We sometimes get surprise interrupts late both because the SMP IRQ delivery
+ * is message passing and because the card sometimes seems to deliver late. I
+ * think if it is part way through a receive and the mode is changed it carries
+ * on receiving and sends us an interrupt. We have to band-aid all these cases
+ * to get a sensible 150kBytes/second performance. Even then you want a small
+ * TCP window.
+ */
+
+static irqreturn_t el_interrupt(int irq, void *dev_id)
+{
+ struct net_device *dev = dev_id;
+ struct net_local *lp;
+ int ioaddr;
+ int axsr; /* Aux. status reg. */
+
+ ioaddr = dev->base_addr;
+ lp = netdev_priv(dev);
+
+ spin_lock(&lp->lock);
+
+ /*
+ * What happened ?
+ */
+
+ axsr = inb(AX_STATUS);
+
+ /*
+ * Log it
+ */
+
+ if (el_debug > 3)
+ pr_debug("%s: el_interrupt() aux=%#02x\n", dev->name, axsr);
+
+ if (lp->loading == 1 && !lp->txing)
+ pr_warning("%s: Inconsistent state loading while not in tx\n",
+ dev->name);
+
+ if (lp->txing) {
+ /*
+ * Board in transmit mode. May be loading. If we are
+ * loading we shouldn't have got this.
+ */
+ int txsr = inb(TX_STATUS);
+
+ if (lp->loading == 1) {
+ if (el_debug > 2)
+ pr_debug("%s: Interrupt while loading [txsr=%02x gp=%04x rp=%04x]\n",
+ dev->name, txsr, inw(GP_LOW), inw(RX_LOW));
+
+ /* Force a reload */
+ lp->loading = 2;
+ spin_unlock(&lp->lock);
+ goto out;
+ }
+ if (el_debug > 6)
+ pr_debug("%s: txsr=%02x gp=%04x rp=%04x\n", dev->name,
+ txsr, inw(GP_LOW), inw(RX_LOW));
+
+ if ((axsr & 0x80) && (txsr & TX_READY) == 0) {
+ /*
+			 * FIXME: is there any logic to whether to keep
+			 * on trying or reset immediately?
+ */
+ if (el_debug > 1)
+ pr_debug("%s: Unusual interrupt during Tx, txsr=%02x axsr=%02x gp=%03x rp=%03x.\n",
+ dev->name, txsr, axsr,
+ inw(ioaddr + EL1_DATAPTR),
+ inw(ioaddr + EL1_RXPTR));
+ lp->txing = 0;
+ netif_wake_queue(dev);
+ } else if (txsr & TX_16COLLISIONS) {
+ /*
+ * Timed out
+ */
+ if (el_debug)
+ pr_debug("%s: Transmit failed 16 times, Ethernet jammed?\n", dev->name);
+ outb(AX_SYS, AX_CMD);
+ lp->txing = 0;
+ dev->stats.tx_aborted_errors++;
+ netif_wake_queue(dev);
+ } else if (txsr & TX_COLLISION) {
+ /*
+ * Retrigger xmit.
+ */
+
+ if (el_debug > 6)
+ pr_debug("%s: retransmitting after a collision.\n", dev->name);
+ /*
+ * Poor little chip can't reset its own start
+ * pointer
+ */
+
+ outb(AX_SYS, AX_CMD);
+ outw(lp->tx_pkt_start, GP_LOW);
+ outb(AX_XMIT, AX_CMD);
+ dev->stats.collisions++;
+ spin_unlock(&lp->lock);
+ goto out;
+ } else {
+ /*
+ * It worked.. we will now fall through and receive
+ */
+ dev->stats.tx_packets++;
+ if (el_debug > 6)
+ pr_debug("%s: Tx succeeded %s\n", dev->name,
+ (txsr & TX_RDY) ? "." : "but tx is busy!");
+ /*
+			 * This is safe: the interrupt is atomic WRT itself.
+ */
+ lp->txing = 0;
+ /* In case more to transmit */
+ netif_wake_queue(dev);
+ }
+ } else {
+ /*
+ * In receive mode.
+ */
+
+ int rxsr = inb(RX_STATUS);
+ if (el_debug > 5)
+ pr_debug("%s: rxsr=%02x txsr=%02x rp=%04x\n",
+ dev->name, rxsr, inb(TX_STATUS), inw(RX_LOW));
+ /*
+ * Just reading rx_status fixes most errors.
+ */
+ if (rxsr & RX_MISSED)
+ dev->stats.rx_missed_errors++;
+ else if (rxsr & RX_RUNT) {
+ /* Handled to avoid board lock-up. */
+ dev->stats.rx_length_errors++;
+ if (el_debug > 5)
+ pr_debug("%s: runt.\n", dev->name);
+ } else if (rxsr & RX_GOOD) {
+ /*
+ * Receive worked.
+ */
+ el_receive(dev);
+ } else {
+ /*
+ * Nothing? Something is broken!
+ */
+ if (el_debug > 2)
+ pr_debug("%s: No packet seen, rxsr=%02x **resetting 3c501***\n",
+ dev->name, rxsr);
+ el_reset(dev);
+ }
+ }
+
+ /*
+ * Move into receive mode
+ */
+
+ outb(AX_RX, AX_CMD);
+ outw(0x00, RX_BUF_CLR);
+ inb(RX_STATUS); /* Be certain that interrupts are cleared. */
+ inb(TX_STATUS);
+ spin_unlock(&lp->lock);
+out:
+ return IRQ_HANDLED;
+}
+
+
+/**
+ * el_receive:
+ * @dev: Device to pull the packets from
+ *
+ * We have a good packet. Well, not really "good", just mostly not broken.
+ * We must check everything to see if it is good. In particular we occasionally
+ * get wild packet sizes from the card. If the packet seems sane we PIO it
+ * off the card and queue it for the protocol layers.
+ */
+
+static void el_receive(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+ int pkt_len;
+ struct sk_buff *skb;
+
+ pkt_len = inw(RX_LOW);
+
+ if (el_debug > 4)
+ pr_debug(" el_receive %d.\n", pkt_len);
+
+ if (pkt_len < 60 || pkt_len > 1536) {
+ if (el_debug)
+ pr_debug("%s: bogus packet, length=%d\n",
+ dev->name, pkt_len);
+ dev->stats.rx_over_errors++;
+ return;
+ }
+
+ /*
+ * Command mode so we can empty the buffer
+ */
+
+ outb(AX_SYS, AX_CMD);
+ skb = dev_alloc_skb(pkt_len+2);
+
+ /*
+ * Start of frame
+ */
+
+ outw(0x00, GP_LOW);
+ if (skb == NULL) {
+ pr_info("%s: Memory squeeze, dropping packet.\n", dev->name);
+ dev->stats.rx_dropped++;
+ return;
+ } else {
+ skb_reserve(skb, 2); /* Force 16 byte alignment */
+ /*
+ * The read increments through the bytes. The interrupt
+ * handler will fix the pointer when it returns to
+ * receive mode.
+ */
+ insb(DATAPORT, skb_put(skb, pkt_len), pkt_len);
+ skb->protocol = eth_type_trans(skb, dev);
+ netif_rx(skb);
+ dev->stats.rx_packets++;
+ dev->stats.rx_bytes += pkt_len;
+ }
+}
+
+/**
+ * el_reset: Reset a 3c501 card
+ * @dev: The 3c501 card about to get zapped
+ *
+ * Even resetting a 3c501 isn't simple. When you activate reset it loses all
+ * its configuration. You must hold the lock when doing this. The function
+ * cannot take the lock itself as it is callable from the irq handler.
+ */
+
+static void el_reset(struct net_device *dev)
+{
+ struct net_local *lp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+
+ if (el_debug > 2)
+ pr_info("3c501 reset...\n");
+ outb(AX_RESET, AX_CMD); /* Reset the chip */
+ /* Aux control, irq and loopback enabled */
+ outb(AX_LOOP, AX_CMD);
+ {
+ int i;
+ for (i = 0; i < 6; i++) /* Set the station address. */
+ outb(dev->dev_addr[i], ioaddr + i);
+ }
+
+ outw(0, RX_BUF_CLR); /* Set rx packet area to 0. */
+ outb(TX_NORM, TX_CMD); /* tx irq on done, collision */
+ outb(RX_NORM, RX_CMD); /* Set Rx commands. */
+ inb(RX_STATUS); /* Clear status. */
+ inb(TX_STATUS);
+ lp->txing = 0;
+}
+
+/**
+ * el1_close:
+ * @dev: 3c501 card to shut down
+ *
+ * Close a 3c501 card. The IFF_UP flag has been cleared by the user via
+ * the SIOCSIFFLAGS ioctl. We stop any further transmissions being queued,
+ * and then disable the interrupts. Finally we reset the chip. The effects
+ * of the rest will be cleaned up by #el1_open. Always returns 0 indicating
+ * a success.
+ */
+
+static int el1_close(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+
+ if (el_debug > 2)
+ pr_info("%s: Shutting down Ethernet card at %#x.\n",
+ dev->name, ioaddr);
+
+ netif_stop_queue(dev);
+
+ /*
+ * Free and disable the IRQ.
+ */
+
+ free_irq(dev->irq, dev);
+ outb(AX_RESET, AX_CMD); /* Reset the chip */
+
+ return 0;
+}
+
+/**
+ * set_multicast_list:
+ * @dev: The device to adjust
+ *
+ * Set or clear the multicast filter for this adaptor to use the best-effort
+ * filtering supported. The 3c501 supports only three modes of filtering.
+ * It always receives broadcasts and packets for itself. You can choose to
+ * optionally receive all packets, or all multicast packets on top of this.
+ */
+
+static void set_multicast_list(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+
+ if (dev->flags & IFF_PROMISC) {
+ outb(RX_PROM, RX_CMD);
+ inb(RX_STATUS);
+ } else if (!netdev_mc_empty(dev) || dev->flags & IFF_ALLMULTI) {
+ /* Multicast or all multicast is the same */
+ outb(RX_MULT, RX_CMD);
+ inb(RX_STATUS); /* Clear status. */
+ } else {
+ outb(RX_NORM, RX_CMD);
+ inb(RX_STATUS);
+ }
+}
+
+
+static void netdev_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ strcpy(info->driver, DRV_NAME);
+ strcpy(info->version, DRV_VERSION);
+ sprintf(info->bus_info, "ISA 0x%lx", dev->base_addr);
+}
+
+static u32 netdev_get_msglevel(struct net_device *dev)
+{
+ return debug;
+}
+
+static void netdev_set_msglevel(struct net_device *dev, u32 level)
+{
+ debug = level;
+}
+
+static const struct ethtool_ops netdev_ethtool_ops = {
+ .get_drvinfo = netdev_get_drvinfo,
+ .get_msglevel = netdev_get_msglevel,
+ .set_msglevel = netdev_set_msglevel,
+};
+
+#ifdef MODULE
+
+static struct net_device *dev_3c501;
+
+module_param(io, int, 0);
+module_param(irq, int, 0);
+MODULE_PARM_DESC(io, "EtherLink I/O base address");
+MODULE_PARM_DESC(irq, "EtherLink IRQ number");
+
+/**
+ * init_module:
+ *
+ * When the driver is loaded as a module this function is called. We fake up
+ * a device structure with the base I/O and interrupt set as if it were being
+ * called from Space.c. This minimises the extra code that would otherwise
+ * be required.
+ *
+ * Returns 0 for success, or the negative errno from el1_probe() (such as
+ * -ENODEV) if no card is found. Returning an error here also causes the
+ * module to be unloaded.
+ */
+
+int __init init_module(void)
+{
+ dev_3c501 = el1_probe(-1);
+ if (IS_ERR(dev_3c501))
+ return PTR_ERR(dev_3c501);
+ return 0;
+}
+
+/**
+ * cleanup_module:
+ *
+ * The module is being unloaded. We unhook our network device from the system
+ * and then free up the resources we took when the card was found.
+ */
+
+void __exit cleanup_module(void)
+{
+ struct net_device *dev = dev_3c501;
+ unregister_netdev(dev);
+ release_region(dev->base_addr, EL1_IO_EXTENT);
+ free_netdev(dev);
+}
+
+#endif /* MODULE */
+
+MODULE_AUTHOR("Donald Becker, Alan Cox");
+MODULE_DESCRIPTION("Support for the ancient 3Com 3c501 ethernet card");
+MODULE_LICENSE("GPL");
+
--- /dev/null
+
+/*
+ * Index to functions.
+ */
+
+static int el1_probe1(struct net_device *dev, int ioaddr);
+static int el_open(struct net_device *dev);
+static void el_timeout(struct net_device *dev);
+static netdev_tx_t el_start_xmit(struct sk_buff *skb, struct net_device *dev);
+static irqreturn_t el_interrupt(int irq, void *dev_id);
+static void el_receive(struct net_device *dev);
+static void el_reset(struct net_device *dev);
+static int el1_close(struct net_device *dev);
+static void set_multicast_list(struct net_device *dev);
+static const struct ethtool_ops netdev_ethtool_ops;
+
+#define EL1_IO_EXTENT 16
+
+#ifndef EL_DEBUG
+#define EL_DEBUG 0 /* use 0 for production, 1 for devel., >2 for debug */
+#endif /* Anything above 5 is wordy death! */
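+/* Alias so the ethtool msglevel helpers in 3c501.c can use 'debug'. */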
+#define debug el_debug
+static int el_debug = EL_DEBUG;
+
+/*
+ * Board-specific info in netdev_priv(dev).
+ */
+
+struct net_local
+{
+	int tx_pkt_start;	/* The start of the current Tx packet in the board buffer. */
+ int collisions; /* Tx collisions this packet */
+ int loading; /* Spot buffer load collisions */
+ int txing; /* True if card is in TX mode */
+ spinlock_t lock; /* Serializing lock */
+};
+
+
+#define RX_STATUS (ioaddr + 0x06)
+#define RX_CMD RX_STATUS
+#define TX_STATUS (ioaddr + 0x07)
+#define TX_CMD TX_STATUS
+#define GP_LOW (ioaddr + 0x08)
+#define GP_HIGH (ioaddr + 0x09)
+#define RX_BUF_CLR (ioaddr + 0x0A)
+#define RX_LOW (ioaddr + 0x0A)
+#define RX_HIGH (ioaddr + 0x0B)
+#define SAPROM (ioaddr + 0x0C)
+#define AX_STATUS (ioaddr + 0x0E)
+#define AX_CMD AX_STATUS
+#define DATAPORT (ioaddr + 0x0F)
+#define TX_RDY 0x08 /* In TX_STATUS */
+
+#define EL1_DATAPTR 0x08
+#define EL1_RXPTR 0x0A
+#define EL1_SAPROM 0x0C
+#define EL1_DATAPORT 0x0f
+
+/*
+ * Writes to the ax command register.
+ */
+
+#define AX_OFF 0x00 /* Irq off, buffer access on */
+#define AX_SYS 0x40 /* Load the buffer */
+#define AX_XMIT 0x44 /* Transmit a packet */
+#define AX_RX 0x48 /* Receive a packet */
+#define AX_LOOP 0x0C /* Loopback mode */
+#define AX_RESET 0x80
+
+/*
+ * Normal receive mode written to RX_STATUS. We must intr on short packets
+ * to avoid bogus rx lockups.
+ */
+
+#define RX_NORM 0xA8 /* 0x68 == all addrs, 0xA8 only to me. */
+#define RX_PROM 0x68 /* Senior Prom, uhmm promiscuous mode. */
+#define RX_MULT 0xE8 /* Accept multicast packets. */
+#define TX_NORM 0x0A /* Interrupt on everything that might hang the chip */
+
+/*
+ * TX_STATUS register.
+ */
+
+#define TX_COLLISION 0x02
+#define TX_16COLLISIONS 0x04
+#define TX_READY 0x08
+
+#define RX_RUNT 0x08
+#define RX_MISSED 0x01 /* Missed a packet due to 3c501 braindamage. */
+#define RX_GOOD 0x30 /* Good packet 0x20, or simple overflow 0x10. */
+
--- /dev/null
+/* 3c509.c: A 3c509 EtherLink3 ethernet driver for linux. */
+/*
+ Written 1993-2000 by Donald Becker.
+
+ Copyright 1994-2000 by Donald Becker.
+ Copyright 1993 United States Government as represented by the
+ Director, National Security Agency. This software may be used and
+ distributed according to the terms of the GNU General Public License,
+ incorporated herein by reference.
+
+ This driver is for the 3Com EtherLinkIII series.
+
+ The author may be reached as becker@scyld.com, or C/O
+ Scyld Computing Corporation
+ 410 Severn Ave., Suite 210
+ Annapolis MD 21403
+
+ Known limitations:
+ Because of the way 3c509 ISA detection works it's difficult to predict
+ a priori which of several ISA-mode cards will be detected first.
+
+ This driver does not use predictive interrupt mode, resulting in higher
+ packet latency but lower overhead. If interrupts are disabled for an
+ unusually long time it could also result in missed packets, but in
+ practice this rarely happens.
+
+
+ FIXES:
+ Alan Cox: Removed the 'Unexpected interrupt' bug.
+ Michael Meskes: Upgraded to Donald Becker's version 1.07.
+ Alan Cox: Increased the eeprom delay. Regardless of
+ what the docs say some people definitely
+ get problems with lower (but in card spec)
+ delays
+ v1.10 4/21/97 Fixed module code so that multiple cards may be detected,
+ other cleanups. -djb
+ Andrea Arcangeli: Upgraded to Donald Becker's version 1.12.
+ Rick Payne: Fixed SMP race condition
+ v1.13 9/8/97 Made 'max_interrupt_work' an insmod-settable variable -djb
+ v1.14 10/15/97 Avoided waiting..discard message for fast machines -djb
+ v1.15 1/31/98 Faster recovery for Tx errors. -djb
+ v1.16 2/3/98 Different ID port handling to avoid sound cards. -djb
+ v1.18 12Mar2001 Andrew Morton
+ - Avoid bogus detect of 3c590's (Andrzej Krzysztofowicz)
+ - Reviewed against 1.18 from scyld.com
+ v1.18a 17Nov2001 Jeff Garzik <jgarzik@pobox.com>
+ - ethtool support
+ v1.18b 1Mar2002 Zwane Mwaikambo <zwane@commfireservices.com>
+ - Power Management support
+ v1.18c 1Mar2002 David Ruggiero <jdr@farfalle.com>
+ - Full duplex support
+ v1.19 16Oct2002 Zwane Mwaikambo <zwane@linuxpower.ca>
+ - Additional ethtool features
+	v1.19a 28Oct2002 David Ruggiero <jdr@farfalle.com>
+ - Increase *read_eeprom udelay to workaround oops with 2 cards.
+ v1.19b 08Nov2002 Marc Zyngier <maz@wild-wind.fr.eu.org>
+ - Introduce driver model for EISA cards.
+ v1.20 04Feb2008 Ondrej Zary <linux@rainbow-software.org>
+ - convert to isa_driver and pnp_driver and some cleanups
+*/
+
+#define DRV_NAME "3c509"
+#define DRV_VERSION "1.20"
+#define DRV_RELDATE "04Feb2008"
+
+/* A few values that may be tweaked. */
+
+/* Time in jiffies before concluding the transmitter is hung. */
+#define TX_TIMEOUT (400*HZ/1000)
+
+#include <linux/module.h>
+#include <linux/mca.h>
+#include <linux/isa.h>
+#include <linux/pnp.h>
+#include <linux/string.h>
+#include <linux/interrupt.h>
+#include <linux/errno.h>
+#include <linux/in.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/pm.h>
+#include <linux/skbuff.h>
+#include <linux/delay.h> /* for udelay() */
+#include <linux/spinlock.h>
+#include <linux/ethtool.h>
+#include <linux/device.h>
+#include <linux/eisa.h>
+#include <linux/bitops.h>
+
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+static char version[] __devinitdata = DRV_NAME ".c:" DRV_VERSION " " DRV_RELDATE " becker@scyld.com\n";
+
+#ifdef EL3_DEBUG
+static int el3_debug = EL3_DEBUG;
+#else
+static int el3_debug = 2;
+#endif
+
+/* Used to do a global count of all the cards in the system. Must be
+ * a global variable so that the mca/eisa probe routines can increment
+ * it */
+static int el3_cards = 0;
+#define EL3_MAX_CARDS 8
+
+/* To minimize the size of the driver source I only define operating
+ constants if they are used several times. You'll need the manual
+ anyway if you want to understand driver details. */
+/* Offsets from base I/O address. */
+#define EL3_DATA 0x00
+#define EL3_CMD 0x0e
+#define EL3_STATUS 0x0e
+#define EEPROM_READ 0x80
+
+#define EL3_IO_EXTENT 16
+
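+/* Most registers live in one of several 16-byte register windows; a window
+ * is selected by writing a SelectWindow command to EL3_CMD. */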
+#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
+
+
+/* The top five bits written to EL3_CMD are a command, the lower
+ 11 bits are the parameter, if applicable. */
+enum c509cmd {
+ TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
+ RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11, RxDiscard = 8<<11,
+ TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
+ FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
+ SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
+ SetTxThreshold = 18<<11, SetTxStart = 19<<11, StatsEnable = 21<<11,
+ StatsDisable = 22<<11, StopCoax = 23<<11, PowerUp = 27<<11,
+ PowerDown = 28<<11, PowerAuto = 29<<11};
+
+enum c509status {
+ IntLatch = 0x0001, AdapterFailure = 0x0002, TxComplete = 0x0004,
+ TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
+ IntReq = 0x0040, StatsFull = 0x0080, CmdBusy = 0x1000, };
+
+/* The SetRxFilter command accepts the following classes: */
+enum RxFilter {
+ RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8 };
+
+/* Register window 1 offsets, the window used in normal operation. */
+#define TX_FIFO 0x00
+#define RX_FIFO 0x00
+#define RX_STATUS 0x08
+#define TX_STATUS 0x0B
+#define TX_FREE 0x0C /* Remaining free bytes in Tx buffer. */
+
+#define WN0_CONF_CTRL 0x04 /* Window 0: Configuration control register */
+#define WN0_ADDR_CONF 0x06 /* Window 0: Address configuration register */
+#define WN0_IRQ 0x08 /* Window 0: Set IRQ line in bits 12-15. */
+#define WN4_MEDIA 0x0A /* Window 4: Various transcvr/media bits. */
+#define MEDIA_TP 0x00C0 /* Enable link beat and jabber for 10baseT. */
+#define WN4_NETDIAG 0x06 /* Window 4: Net diagnostic */
+#define FD_ENABLE 0x8000 /* Enable full-duplex ("external loopback") */
+
+/*
+ * Must be a power of two (we use a binary AND in the
+ * circular queue)
+ */
+#define SKB_QUEUE_SIZE 64
+
+enum el3_cardtype { EL3_ISA, EL3_PNP, EL3_MCA, EL3_EISA };
+
+struct el3_private {
+ spinlock_t lock;
+ /* skb send-queue */
+ int head, size;
+ struct sk_buff *queue[SKB_QUEUE_SIZE];
+ enum el3_cardtype type;
+};
+static int id_port;
+static int current_tag;
+static struct net_device *el3_devs[EL3_MAX_CARDS];
+
+/* Parameters that may be passed into the module. */
+static int debug = -1;
+static int irq[] = {-1, -1, -1, -1, -1, -1, -1, -1};
+/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
+static int max_interrupt_work = 10;
+#ifdef CONFIG_PNP
+static int nopnp;
+#endif
+
+static int __devinit el3_common_init(struct net_device *dev);
+static void el3_common_remove(struct net_device *dev);
+static ushort id_read_eeprom(int index);
+static ushort read_eeprom(int ioaddr, int index);
+static int el3_open(struct net_device *dev);
+static netdev_tx_t el3_start_xmit(struct sk_buff *skb, struct net_device *dev);
+static irqreturn_t el3_interrupt(int irq, void *dev_id);
+static void update_stats(struct net_device *dev);
+static struct net_device_stats *el3_get_stats(struct net_device *dev);
+static int el3_rx(struct net_device *dev);
+static int el3_close(struct net_device *dev);
+static void set_multicast_list(struct net_device *dev);
+static void el3_tx_timeout (struct net_device *dev);
+static void el3_down(struct net_device *dev);
+static void el3_up(struct net_device *dev);
+static const struct ethtool_ops ethtool_ops;
+#ifdef CONFIG_PM
+static int el3_suspend(struct device *, pm_message_t);
+static int el3_resume(struct device *);
+#else
+#define el3_suspend NULL
+#define el3_resume NULL
+#endif
+
+
+/* generic device remove for all device types */
+static int el3_device_remove (struct device *device);
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void el3_poll_controller(struct net_device *dev);
+#endif
+
+/* Return 0 on success, 1 on error, 2 when an already-detected PnP card is found */
+static int el3_isa_id_sequence(__be16 *phys_addr)
+{
+ short lrs_state = 0xff;
+ int i;
+
+ /* ISA boards are detected by sending the ID sequence to the
+ ID_PORT. We find cards past the first by setting the 'current_tag'
+ on cards as they are found. Cards with their tag set will not
+ respond to subsequent ID sequences. */
+
+ outb(0x00, id_port);
+ outb(0x00, id_port);
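+	/* Write the 255-byte ID sequence. It is a pseudo-random stream
+	 * produced by the simple feedback shift register below; the card's
+	 * ID logic watches for this sequence. */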
+ for (i = 0; i < 255; i++) {
+ outb(lrs_state, id_port);
+ lrs_state <<= 1;
+ lrs_state = lrs_state & 0x100 ? lrs_state ^ 0xcf : lrs_state;
+ }
+	/* For the first probe, clear all boards' tag registers. */
+ if (current_tag == 0)
+ outb(0xd0, id_port);
+ else /* Otherwise kill off already-found boards. */
+ outb(0xd8, id_port);
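+	/* EEPROM word 7 holds the 3Com product ID; anything other than
+	 * 0x6d50 is not an EtherLink III. */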
+ if (id_read_eeprom(7) != 0x6d50)
+ return 1;
+ /* Read in EEPROM data, which does contention-select.
+ Only the lowest address board will stay "on-line".
+ 3Com got the byte order backwards. */
+ for (i = 0; i < 3; i++)
+ phys_addr[i] = htons(id_read_eeprom(i));
+#ifdef CONFIG_PNP
+ if (!nopnp) {
+ /* The ISA PnP 3c509 cards respond to the ID sequence too.
+ This check is needed in order not to register them twice. */
+ for (i = 0; i < el3_cards; i++) {
+ struct el3_private *lp = netdev_priv(el3_devs[i]);
+ if (lp->type == EL3_PNP &&
+ !memcmp(phys_addr, el3_devs[i]->dev_addr,
+ ETH_ALEN)) {
+ if (el3_debug > 3)
+ pr_debug("3c509 with address %02x %02x %02x %02x %02x %02x was found by ISAPnP\n",
+ phys_addr[0] & 0xff, phys_addr[0] >> 8,
+ phys_addr[1] & 0xff, phys_addr[1] >> 8,
+ phys_addr[2] & 0xff, phys_addr[2] >> 8);
+ /* Set the adaptor tag so that the next card can be found. */
+ outb(0xd0 + ++current_tag, id_port);
+ return 2;
+ }
+ }
+ }
+#endif /* CONFIG_PNP */
+ return 0;
+
+}
+
+static void __devinit el3_dev_fill(struct net_device *dev, __be16 *phys_addr,
+ int ioaddr, int irq, int if_port,
+ enum el3_cardtype type)
+{
+ struct el3_private *lp = netdev_priv(dev);
+
+ memcpy(dev->dev_addr, phys_addr, ETH_ALEN);
+ dev->base_addr = ioaddr;
+ dev->irq = irq;
+ dev->if_port = if_port;
+ lp->type = type;
+}
+
+static int __devinit el3_isa_match(struct device *pdev,
+ unsigned int ndev)
+{
+ struct net_device *dev;
+ int ioaddr, isa_irq, if_port, err;
+ unsigned int iobase;
+ __be16 phys_addr[3];
+
+ while ((err = el3_isa_id_sequence(phys_addr)) == 2)
+ ; /* Skip to next card when PnP card found */
+ if (err == 1)
+ return 0;
+
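+	/* EEPROM word 8 is the address configuration: the transceiver type
+	 * in the top two bits and the I/O base (in steps of 0x10 above
+	 * 0x200) in the low five bits. */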
+ iobase = id_read_eeprom(8);
+ if_port = iobase >> 14;
+ ioaddr = 0x200 + ((iobase & 0x1f) << 4);
+ if (irq[el3_cards] > 1 && irq[el3_cards] < 16)
+ isa_irq = irq[el3_cards];
+ else
+ isa_irq = id_read_eeprom(9) >> 12;
+
+ dev = alloc_etherdev(sizeof(struct el3_private));
+ if (!dev)
+ return -ENOMEM;
+
+ netdev_boot_setup_check(dev);
+
+ if (!request_region(ioaddr, EL3_IO_EXTENT, "3c509-isa")) {
+ free_netdev(dev);
+ return 0;
+ }
+
+ /* Set the adaptor tag so that the next card can be found. */
+ outb(0xd0 + ++current_tag, id_port);
+
+ /* Activate the adaptor at the EEPROM location. */
+ outb((ioaddr >> 4) | 0xe0, id_port);
+
+ EL3WINDOW(0);
+ if (inw(ioaddr) != 0x6d50) {
+ free_netdev(dev);
+ return 0;
+ }
+
+ /* Free the interrupt so that some other card can use it. */
+ outw(0x0f00, ioaddr + WN0_IRQ);
+
+ el3_dev_fill(dev, phys_addr, ioaddr, isa_irq, if_port, EL3_ISA);
+ dev_set_drvdata(pdev, dev);
+ if (el3_common_init(dev)) {
+ free_netdev(dev);
+ return 0;
+ }
+
+ el3_devs[el3_cards++] = dev;
+ return 1;
+}
+
+static int __devexit el3_isa_remove(struct device *pdev,
+ unsigned int ndev)
+{
+ el3_device_remove(pdev);
+ dev_set_drvdata(pdev, NULL);
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int el3_isa_suspend(struct device *dev, unsigned int n,
+ pm_message_t state)
+{
+ current_tag = 0;
+ return el3_suspend(dev, state);
+}
+
+static int el3_isa_resume(struct device *dev, unsigned int n)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+ int ioaddr = ndev->base_addr, err;
+ __be16 phys_addr[3];
+
+ while ((err = el3_isa_id_sequence(phys_addr)) == 2)
+ ; /* Skip to next card when PnP card found */
+ if (err == 1)
+ return 0;
+ /* Set the adaptor tag so that the next card can be found. */
+ outb(0xd0 + ++current_tag, id_port);
+ /* Enable the card */
+ outb((ioaddr >> 4) | 0xe0, id_port);
+ EL3WINDOW(0);
+ if (inw(ioaddr) != 0x6d50)
+ return 1;
+ /* Free the interrupt so that some other card can use it. */
+ outw(0x0f00, ioaddr + WN0_IRQ);
+ return el3_resume(dev);
+}
+#endif
+
+static struct isa_driver el3_isa_driver = {
+ .match = el3_isa_match,
+ .remove = __devexit_p(el3_isa_remove),
+#ifdef CONFIG_PM
+ .suspend = el3_isa_suspend,
+ .resume = el3_isa_resume,
+#endif
+ .driver = {
+ .name = "3c509"
+ },
+};
+static int isa_registered;
+
+#ifdef CONFIG_PNP
+static struct pnp_device_id el3_pnp_ids[] = {
+ { .id = "TCM5090" }, /* 3Com Etherlink III (TP) */
+ { .id = "TCM5091" }, /* 3Com Etherlink III */
+ { .id = "TCM5094" }, /* 3Com Etherlink III (combo) */
+ { .id = "TCM5095" }, /* 3Com Etherlink III (TPO) */
+ { .id = "TCM5098" }, /* 3Com Etherlink III (TPC) */
+ { .id = "PNP80f7" }, /* 3Com Etherlink III compatible */
+ { .id = "PNP80f8" }, /* 3Com Etherlink III compatible */
+ { .id = "" }
+};
+MODULE_DEVICE_TABLE(pnp, el3_pnp_ids);
+
+static int __devinit el3_pnp_probe(struct pnp_dev *pdev,
+ const struct pnp_device_id *id)
+{
+ short i;
+ int ioaddr, irq, if_port;
+ __be16 phys_addr[3];
+ struct net_device *dev = NULL;
+ int err;
+
+ ioaddr = pnp_port_start(pdev, 0);
+ if (!request_region(ioaddr, EL3_IO_EXTENT, "3c509-pnp"))
+ return -EBUSY;
+ irq = pnp_irq(pdev, 0);
+ EL3WINDOW(0);
+ for (i = 0; i < 3; i++)
+ phys_addr[i] = htons(read_eeprom(ioaddr, i));
+ if_port = read_eeprom(ioaddr, 8) >> 14;
+ dev = alloc_etherdev(sizeof(struct el3_private));
+ if (!dev) {
+ release_region(ioaddr, EL3_IO_EXTENT);
+ return -ENOMEM;
+ }
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ netdev_boot_setup_check(dev);
+
+ el3_dev_fill(dev, phys_addr, ioaddr, irq, if_port, EL3_PNP);
+ pnp_set_drvdata(pdev, dev);
+ err = el3_common_init(dev);
+
+ if (err) {
+ pnp_set_drvdata(pdev, NULL);
+ free_netdev(dev);
+ return err;
+ }
+
+ el3_devs[el3_cards++] = dev;
+ return 0;
+}
+
+static void __devexit el3_pnp_remove(struct pnp_dev *pdev)
+{
+ el3_common_remove(pnp_get_drvdata(pdev));
+ pnp_set_drvdata(pdev, NULL);
+}
+
+#ifdef CONFIG_PM
+static int el3_pnp_suspend(struct pnp_dev *pdev, pm_message_t state)
+{
+ return el3_suspend(&pdev->dev, state);
+}
+
+static int el3_pnp_resume(struct pnp_dev *pdev)
+{
+ return el3_resume(&pdev->dev);
+}
+#endif
+
+static struct pnp_driver el3_pnp_driver = {
+ .name = "3c509",
+ .id_table = el3_pnp_ids,
+ .probe = el3_pnp_probe,
+ .remove = __devexit_p(el3_pnp_remove),
+#ifdef CONFIG_PM
+ .suspend = el3_pnp_suspend,
+ .resume = el3_pnp_resume,
+#endif
+};
+static int pnp_registered;
+#endif /* CONFIG_PNP */
+
+#ifdef CONFIG_EISA
+static struct eisa_device_id el3_eisa_ids[] = {
+ { "TCM5090" },
+ { "TCM5091" },
+ { "TCM5092" },
+ { "TCM5093" },
+ { "TCM5094" },
+ { "TCM5095" },
+ { "TCM5098" },
+ { "" }
+};
+MODULE_DEVICE_TABLE(eisa, el3_eisa_ids);
+
+static int el3_eisa_probe (struct device *device);
+
+static struct eisa_driver el3_eisa_driver = {
+ .id_table = el3_eisa_ids,
+ .driver = {
+ .name = "3c579",
+ .probe = el3_eisa_probe,
+ .remove = __devexit_p (el3_device_remove),
+ .suspend = el3_suspend,
+ .resume = el3_resume,
+ }
+};
+static int eisa_registered;
+#endif
+
+#ifdef CONFIG_MCA
+static int el3_mca_probe(struct device *dev);
+
+static short el3_mca_adapter_ids[] __initdata = {
+ 0x627c,
+ 0x627d,
+ 0x62db,
+ 0x62f6,
+ 0x62f7,
+ 0x0000
+};
+
+static char *el3_mca_adapter_names[] __initdata = {
+ "3Com 3c529 EtherLink III (10base2)",
+ "3Com 3c529 EtherLink III (10baseT)",
+ "3Com 3c529 EtherLink III (test mode)",
+ "3Com 3c529 EtherLink III (TP or coax)",
+ "3Com 3c529 EtherLink III (TP)",
+ NULL
+};
+
+static struct mca_driver el3_mca_driver = {
+ .id_table = el3_mca_adapter_ids,
+ .driver = {
+ .name = "3c529",
+ .bus = &mca_bus_type,
+ .probe = el3_mca_probe,
+ .remove = __devexit_p(el3_device_remove),
+ .suspend = el3_suspend,
+ .resume = el3_resume,
+ },
+};
+static int mca_registered;
+#endif /* CONFIG_MCA */
+
+static const struct net_device_ops netdev_ops = {
+ .ndo_open = el3_open,
+ .ndo_stop = el3_close,
+ .ndo_start_xmit = el3_start_xmit,
+ .ndo_get_stats = el3_get_stats,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_tx_timeout = el3_tx_timeout,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = el3_poll_controller,
+#endif
+};
+
+static int __devinit el3_common_init(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ int err;
+ const char *if_names[] = {"10baseT", "AUI", "undefined", "BNC"};
+
+ spin_lock_init(&lp->lock);
+
+ if (dev->mem_start & 0x05) { /* xcvr codes 1/3/4/12 */
+ dev->if_port = (dev->mem_start & 0x0f);
+ } else { /* xcvr codes 0/8 */
+ /* use eeprom value, but save user's full-duplex selection */
+ dev->if_port |= (dev->mem_start & 0x08);
+ }
+
+ /* The EL3-specific entries in the device structure. */
+ dev->netdev_ops = &netdev_ops;
+ dev->watchdog_timeo = TX_TIMEOUT;
+	SET_ETHTOOL_OPS(dev, &ethtool_ops);
+
+ err = register_netdev(dev);
+ if (err) {
+ pr_err("Failed to register 3c5x9 at %#3.3lx, IRQ %d.\n",
+ dev->base_addr, dev->irq);
+ release_region(dev->base_addr, EL3_IO_EXTENT);
+ return err;
+ }
+
+ pr_info("%s: 3c5x9 found at %#3.3lx, %s port, address %pM, IRQ %d.\n",
+ dev->name, dev->base_addr, if_names[(dev->if_port & 0x03)],
+ dev->dev_addr, dev->irq);
+
+ if (el3_debug > 0)
+ pr_info("%s", version);
+ return 0;
+
+}
+
+static void el3_common_remove (struct net_device *dev)
+{
+ unregister_netdev (dev);
+ release_region(dev->base_addr, EL3_IO_EXTENT);
+ free_netdev (dev);
+}
+
+#ifdef CONFIG_MCA
+static int __init el3_mca_probe(struct device *device)
+{
+ /* Based on Erik Nygren's (nygren@mit.edu) 3c529 patch,
+ * heavily modified by Chris Beauregard
+ * (cpbeaure@csclub.uwaterloo.ca) to support standard MCA
+ * probing.
+ *
+ * redone for multi-card detection by ZP Gu (zpg@castle.net)
+ * now works as a module */
+
+ short i;
+ int ioaddr, irq, if_port;
+ __be16 phys_addr[3];
+ struct net_device *dev = NULL;
+ u_char pos4, pos5;
+ struct mca_device *mdev = to_mca_device(device);
+ int slot = mdev->slot;
+ int err;
+
+ pos4 = mca_device_read_stored_pos(mdev, 4);
+ pos5 = mca_device_read_stored_pos(mdev, 5);
+
+ ioaddr = ((short)((pos4&0xfc)|0x02)) << 8;
+ irq = pos5 & 0x0f;
+
+
+ pr_info("3c529: found %s at slot %d\n",
+ el3_mca_adapter_names[mdev->index], slot + 1);
+
+ /* claim the slot */
+ strncpy(mdev->name, el3_mca_adapter_names[mdev->index],
+ sizeof(mdev->name));
+ mca_device_set_claim(mdev, 1);
+
+ if_port = pos4 & 0x03;
+
+ irq = mca_device_transform_irq(mdev, irq);
+ ioaddr = mca_device_transform_ioport(mdev, ioaddr);
+ if (el3_debug > 2) {
+ pr_debug("3c529: irq %d ioaddr 0x%x ifport %d\n", irq, ioaddr, if_port);
+ }
+ EL3WINDOW(0);
+ for (i = 0; i < 3; i++)
+ phys_addr[i] = htons(read_eeprom(ioaddr, i));
+
+ dev = alloc_etherdev(sizeof (struct el3_private));
+ if (dev == NULL) {
+ release_region(ioaddr, EL3_IO_EXTENT);
+ return -ENOMEM;
+ }
+
+ netdev_boot_setup_check(dev);
+
+ el3_dev_fill(dev, phys_addr, ioaddr, irq, if_port, EL3_MCA);
+ dev_set_drvdata(device, dev);
+ err = el3_common_init(dev);
+
+ if (err) {
+ dev_set_drvdata(device, NULL);
+ free_netdev(dev);
+ return -ENOMEM;
+ }
+
+ el3_devs[el3_cards++] = dev;
+ return 0;
+}
+
+#endif /* CONFIG_MCA */
+
+#ifdef CONFIG_EISA
+static int __init el3_eisa_probe (struct device *device)
+{
+ short i;
+ int ioaddr, irq, if_port;
+ __be16 phys_addr[3];
+ struct net_device *dev = NULL;
+ struct eisa_device *edev;
+ int err;
+
+	/* Yippee, the driver framework is calling us! */
+ edev = to_eisa_device (device);
+ ioaddr = edev->base_addr;
+
+ if (!request_region(ioaddr, EL3_IO_EXTENT, "3c579-eisa"))
+ return -EBUSY;
+
+ /* Change the register set to the configuration window 0. */
+ outw(SelectWindow | 0, ioaddr + 0xC80 + EL3_CMD);
+
+ irq = inw(ioaddr + WN0_IRQ) >> 12;
+ if_port = inw(ioaddr + 6)>>14;
+ for (i = 0; i < 3; i++)
+ phys_addr[i] = htons(read_eeprom(ioaddr, i));
+
+ /* Restore the "Product ID" to the EEPROM read register. */
+ read_eeprom(ioaddr, 3);
+
+ dev = alloc_etherdev(sizeof (struct el3_private));
+ if (dev == NULL) {
+ release_region(ioaddr, EL3_IO_EXTENT);
+ return -ENOMEM;
+ }
+
+ netdev_boot_setup_check(dev);
+
+ el3_dev_fill(dev, phys_addr, ioaddr, irq, if_port, EL3_EISA);
+ eisa_set_drvdata (edev, dev);
+ err = el3_common_init(dev);
+
+ if (err) {
+ eisa_set_drvdata (edev, NULL);
+ free_netdev(dev);
+ return err;
+ }
+
+ el3_devs[el3_cards++] = dev;
+ return 0;
+}
+#endif
+
+/* This remove works for all device types.
+ *
+ * The net dev must be stored in the driver data field */
+static int __devexit el3_device_remove (struct device *device)
+{
+ struct net_device *dev;
+
+ dev = dev_get_drvdata(device);
+
+ el3_common_remove (dev);
+ return 0;
+}
+
+/* Read a word from the EEPROM using the regular EEPROM access register.
+ Assume that we are in register window zero.
+ */
+static ushort read_eeprom(int ioaddr, int index)
+{
+ outw(EEPROM_READ + index, ioaddr + 10);
+	/* Pause for at least 162 us for the read to take place.
+	   Some chips seem to require much longer. */
+ mdelay(2);
+ return inw(ioaddr + 12);
+}
+
+/* Read a word from the EEPROM when in the ISA ID probe state. */
+static ushort id_read_eeprom(int index)
+{
+ int bit, word = 0;
+
+	/* Issue read command, and pause for at least 162 us for it to complete.
+	   Assume an extra-fast 16 MHz bus. */
+ outb(EEPROM_READ + index, id_port);
+
+ /* Pause for at least 162 us. for the read to take place. */
+ /* Some chips seem to require much longer */
+ mdelay(4);
+
+ for (bit = 15; bit >= 0; bit--)
+ word = (word << 1) + (inb(id_port) & 0x01);
+
+ if (el3_debug > 3)
+ pr_debug(" 3c509 EEPROM word %d %#4.4x.\n", index, word);
+
+ return word;
+}
+
+
+static int
+el3_open(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+ int i;
+
+ outw(TxReset, ioaddr + EL3_CMD);
+ outw(RxReset, ioaddr + EL3_CMD);
+ outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
+
+ i = request_irq(dev->irq, el3_interrupt, 0, dev->name, dev);
+ if (i)
+ return i;
+
+ EL3WINDOW(0);
+ if (el3_debug > 3)
+ pr_debug("%s: Opening, IRQ %d status@%x %4.4x.\n", dev->name,
+ dev->irq, ioaddr + EL3_STATUS, inw(ioaddr + EL3_STATUS));
+
+ el3_up(dev);
+
+ if (el3_debug > 3)
+ pr_debug("%s: Opened 3c509 IRQ %d status %4.4x.\n",
+ dev->name, dev->irq, inw(ioaddr + EL3_STATUS));
+
+ return 0;
+}
+
+static void
+el3_tx_timeout (struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+
+ /* Transmitter timeout, serious problems. */
+ pr_warning("%s: transmit timed out, Tx_status %2.2x status %4.4x Tx FIFO room %d.\n",
+ dev->name, inb(ioaddr + TX_STATUS), inw(ioaddr + EL3_STATUS),
+ inw(ioaddr + TX_FREE));
+ dev->stats.tx_errors++;
+ dev->trans_start = jiffies; /* prevent tx timeout */
+ /* Issue TX_RESET and TX_START commands. */
+ outw(TxReset, ioaddr + EL3_CMD);
+ outw(TxEnable, ioaddr + EL3_CMD);
+ netif_wake_queue(dev);
+}
+
+
+static netdev_tx_t
+el3_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+ unsigned long flags;
+
+ netif_stop_queue (dev);
+
+ dev->stats.tx_bytes += skb->len;
+
+ if (el3_debug > 4) {
+ pr_debug("%s: el3_start_xmit(length = %u) called, status %4.4x.\n",
+ dev->name, skb->len, inw(ioaddr + EL3_STATUS));
+ }
+#if 0
+#ifndef final_version
+ { /* Error-checking code, delete someday. */
+ ushort status = inw(ioaddr + EL3_STATUS);
+ if (status & 0x0001 && /* IRQ line active, missed one. */
+ inw(ioaddr + EL3_STATUS) & 1) { /* Make sure. */
+ pr_debug("%s: Missed interrupt, status then %04x now %04x"
+ " Tx %2.2x Rx %4.4x.\n", dev->name, status,
+ inw(ioaddr + EL3_STATUS), inb(ioaddr + TX_STATUS),
+ inw(ioaddr + RX_STATUS));
+ /* Fake interrupt trigger by masking, acknowledge interrupts. */
+ outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
+ outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
+ ioaddr + EL3_CMD);
+ outw(SetStatusEnb | 0xff, ioaddr + EL3_CMD);
+ }
+ }
+#endif
+#endif
+ /*
+ * We lock the driver against other processors. Note
+ * we don't need to lock versus the IRQ as we suspended
+ * that. This means that we lose the ability to take
+ * an RX during a TX upload. That sucks a bit with SMP
+ * on an original 3c509 (2K buffer)
+ *
+ * Using disable_irq stops us crapping on other
+ * time sensitive devices.
+ */
+
+ spin_lock_irqsave(&lp->lock, flags);
+
+ /* Put out the doubleword header... */
+ outw(skb->len, ioaddr + TX_FIFO);
+ outw(0x00, ioaddr + TX_FIFO);
+ /* ... and the packet rounded to a doubleword. */
+ outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
+
+ if (inw(ioaddr + TX_FREE) > 1536)
+ netif_start_queue(dev);
+ else
+ /* Interrupt us when the FIFO has room for max-sized packet. */
+ outw(SetTxThreshold + 1536, ioaddr + EL3_CMD);
+
+ spin_unlock_irqrestore(&lp->lock, flags);
+
+ dev_kfree_skb (skb);
+
+ /* Clear the Tx status stack. */
+ {
+ short tx_status;
+ int i = 4;
+
+ while (--i > 0 && (tx_status = inb(ioaddr + TX_STATUS)) > 0) {
+ if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
+ if (tx_status & 0x30) outw(TxReset, ioaddr + EL3_CMD);
+ if (tx_status & 0x3C) outw(TxEnable, ioaddr + EL3_CMD);
+ outb(0x00, ioaddr + TX_STATUS); /* Pop the status stack. */
+ }
+ }
+ return NETDEV_TX_OK;
+}
+
+/* The EL3 interrupt handler. */
+static irqreturn_t
+el3_interrupt(int irq, void *dev_id)
+{
+ struct net_device *dev = dev_id;
+ struct el3_private *lp;
+ int ioaddr, status;
+ int i = max_interrupt_work;
+
+ lp = netdev_priv(dev);
+ spin_lock(&lp->lock);
+
+ ioaddr = dev->base_addr;
+
+ if (el3_debug > 4) {
+ status = inw(ioaddr + EL3_STATUS);
+ pr_debug("%s: interrupt, status %4.4x.\n", dev->name, status);
+ }
+
+ while ((status = inw(ioaddr + EL3_STATUS)) &
+ (IntLatch | RxComplete | StatsFull)) {
+
+ if (status & RxComplete)
+ el3_rx(dev);
+
+ if (status & TxAvailable) {
+ if (el3_debug > 5)
+ pr_debug(" TX room bit was handled.\n");
+ /* There's room in the FIFO for a full-sized packet. */
+ outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
+ netif_wake_queue (dev);
+ }
+ if (status & (AdapterFailure | RxEarly | StatsFull | TxComplete)) {
+ /* Handle all uncommon interrupts. */
+ if (status & StatsFull) /* Empty statistics. */
+ update_stats(dev);
+ if (status & RxEarly) { /* Rx early is unused. */
+ el3_rx(dev);
+ outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
+ }
+ if (status & TxComplete) { /* Really Tx error. */
+ short tx_status;
+ int i = 4;
+
+ while (--i>0 && (tx_status = inb(ioaddr + TX_STATUS)) > 0) {
+ if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
+ if (tx_status & 0x30) outw(TxReset, ioaddr + EL3_CMD);
+ if (tx_status & 0x3C) outw(TxEnable, ioaddr + EL3_CMD);
+ outb(0x00, ioaddr + TX_STATUS); /* Pop the status stack. */
+ }
+ }
+ if (status & AdapterFailure) {
+ /* Adapter failure requires Rx reset and reinit. */
+ outw(RxReset, ioaddr + EL3_CMD);
+ /* Set the Rx filter to the current state. */
+ outw(SetRxFilter | RxStation | RxBroadcast
+ | (dev->flags & IFF_ALLMULTI ? RxMulticast : 0)
+ | (dev->flags & IFF_PROMISC ? RxProm : 0),
+ ioaddr + EL3_CMD);
+ outw(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
+ outw(AckIntr | AdapterFailure, ioaddr + EL3_CMD);
+ }
+ }
+
+ if (--i < 0) {
+ pr_err("%s: Infinite loop in interrupt, status %4.4x.\n",
+ dev->name, status);
+ /* Clear all interrupts. */
+ outw(AckIntr | 0xFF, ioaddr + EL3_CMD);
+ break;
+ }
+ /* Acknowledge the IRQ. */
+ outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD); /* Ack IRQ */
+ }
+
+ if (el3_debug > 4) {
+ pr_debug("%s: exiting interrupt, status %4.4x.\n", dev->name,
+ inw(ioaddr + EL3_STATUS));
+ }
+ spin_unlock(&lp->lock);
+ return IRQ_HANDLED;
+}
+
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/*
+ * Polling receive - used by netconsole and other diagnostic tools
+ * to allow network i/o with interrupts disabled.
+ */
+static void el3_poll_controller(struct net_device *dev)
+{
+ disable_irq(dev->irq);
+ el3_interrupt(dev->irq, dev);
+ enable_irq(dev->irq);
+}
+#endif
+
+static struct net_device_stats *
+el3_get_stats(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned long flags;
+
+ /*
+ * This is fast enough not to bother with disable IRQ
+ * stuff.
+ */
+
+ spin_lock_irqsave(&lp->lock, flags);
+ update_stats(dev);
+ spin_unlock_irqrestore(&lp->lock, flags);
+ return &dev->stats;
+}
+
+/* Update statistics. We change to register window 6, so this should be run
+ single-threaded if the device is active. This is expected to be a rare
+ operation, and it's simpler for the rest of the driver to assume that
+ window 1 is always valid rather than use a special window-state variable.
+ */
+static void update_stats(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+
+ if (el3_debug > 5)
+ pr_debug(" Updating the statistics.\n");
+ /* Turn off statistics updates while reading. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+ /* Switch to the stats window, and read everything. */
+ EL3WINDOW(6);
+ dev->stats.tx_carrier_errors += inb(ioaddr + 0);
+ dev->stats.tx_heartbeat_errors += inb(ioaddr + 1);
+ /* Multiple collisions. */ inb(ioaddr + 2);
+ dev->stats.collisions += inb(ioaddr + 3);
+ dev->stats.tx_window_errors += inb(ioaddr + 4);
+ dev->stats.rx_fifo_errors += inb(ioaddr + 5);
+ dev->stats.tx_packets += inb(ioaddr + 6);
+ /* Rx packets */ inb(ioaddr + 7);
+ /* Tx deferrals */ inb(ioaddr + 8);
+ inw(ioaddr + 10); /* Total Rx and Tx octets. */
+ inw(ioaddr + 12);
+
+ /* Back to window 1, and turn statistics back on. */
+ EL3WINDOW(1);
+ outw(StatsEnable, ioaddr + EL3_CMD);
+}
+
+static int
+el3_rx(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+ short rx_status;
+
+ if (el3_debug > 5)
+ pr_debug(" In rx_packet(), status %4.4x, rx_status %4.4x.\n",
+ inw(ioaddr+EL3_STATUS), inw(ioaddr+RX_STATUS));
+ while ((rx_status = inw(ioaddr + RX_STATUS)) > 0) {
+ if (rx_status & 0x4000) { /* Error, update stats. */
+ short error = rx_status & 0x3800;
+
+ outw(RxDiscard, ioaddr + EL3_CMD);
+ dev->stats.rx_errors++;
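+			/* Bits 11-13 of rx_status encode the error type. */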
+ switch (error) {
+ case 0x0000: dev->stats.rx_over_errors++; break;
+ case 0x0800: dev->stats.rx_length_errors++; break;
+ case 0x1000: dev->stats.rx_frame_errors++; break;
+ case 0x1800: dev->stats.rx_length_errors++; break;
+ case 0x2000: dev->stats.rx_frame_errors++; break;
+ case 0x2800: dev->stats.rx_crc_errors++; break;
+ }
+ } else {
+ short pkt_len = rx_status & 0x7ff;
+ struct sk_buff *skb;
+
+ skb = dev_alloc_skb(pkt_len+5);
+ if (el3_debug > 4)
+ pr_debug("Receiving packet size %d status %4.4x.\n",
+ pkt_len, rx_status);
+ if (skb != NULL) {
+				skb_reserve(skb, 2);	/* Align IP on 16 byte boundaries */
+
+ /* 'skb->data' points to the start of sk_buff data area. */
+ insl(ioaddr + RX_FIFO, skb_put(skb,pkt_len),
+ (pkt_len + 3) >> 2);
+
+ outw(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
+ skb->protocol = eth_type_trans(skb,dev);
+ netif_rx(skb);
+ dev->stats.rx_bytes += pkt_len;
+ dev->stats.rx_packets++;
+ continue;
+ }
+ outw(RxDiscard, ioaddr + EL3_CMD);
+ dev->stats.rx_dropped++;
+ if (el3_debug)
+ pr_debug("%s: Couldn't allocate a sk_buff of size %d.\n",
+ dev->name, pkt_len);
+ }
+ inw(ioaddr + EL3_STATUS); /* Delay. */
+ while (inw(ioaddr + EL3_STATUS) & 0x1000)
+ pr_debug(" Waiting for 3c509 to discard packet, status %x.\n",
+ inw(ioaddr + EL3_STATUS) );
+ }
+
+ return 0;
+}
+
+/*
+ * Set or clear the multicast filter for this adaptor.
+ */
+static void
+set_multicast_list(struct net_device *dev)
+{
+ unsigned long flags;
+ struct el3_private *lp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+ int mc_count = netdev_mc_count(dev);
+
+ if (el3_debug > 1) {
+ static int old;
+ if (old != mc_count) {
+ old = mc_count;
+ pr_debug("%s: Setting Rx mode to %d addresses.\n",
+ dev->name, mc_count);
+ }
+ }
+ spin_lock_irqsave(&lp->lock, flags);
+ if (dev->flags&IFF_PROMISC) {
+ outw(SetRxFilter | RxStation | RxMulticast | RxBroadcast | RxProm,
+ ioaddr + EL3_CMD);
+ }
+ else if (mc_count || (dev->flags&IFF_ALLMULTI)) {
+ outw(SetRxFilter | RxStation | RxMulticast | RxBroadcast, ioaddr + EL3_CMD);
+ }
+ else
+ outw(SetRxFilter | RxStation | RxBroadcast, ioaddr + EL3_CMD);
+ spin_unlock_irqrestore(&lp->lock, flags);
+}
+
+static int
+el3_close(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+ struct el3_private *lp = netdev_priv(dev);
+
+ if (el3_debug > 2)
+ pr_debug("%s: Shutting down ethercard.\n", dev->name);
+
+ el3_down(dev);
+
+ free_irq(dev->irq, dev);
+ /* Switching back to window 0 disables the IRQ. */
+ EL3WINDOW(0);
+ if (lp->type != EL3_EISA) {
+ /* But we explicitly zero the IRQ line select anyway. Don't do
+ * it on EISA cards, it prevents the module from getting an
+ * IRQ after unload+reload... */
+ outw(0x0f00, ioaddr + WN0_IRQ);
+ }
+
+ return 0;
+}
+
+static int
+el3_link_ok(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+ u16 tmp;
+
+ EL3WINDOW(4);
+ tmp = inw(ioaddr + WN4_MEDIA);
+ EL3WINDOW(1);
+ return tmp & (1<<11);
+}
+
+static int
+el3_netdev_get_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd)
+{
+ u16 tmp;
+ int ioaddr = dev->base_addr;
+
+ EL3WINDOW(0);
+ /* obtain current transceiver via WN4_MEDIA? */
+ tmp = inw(ioaddr + WN0_ADDR_CONF);
+ ecmd->transceiver = XCVR_INTERNAL;
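+	/* Bits 15:14 of the address configuration word select the
+	 * transceiver: 0 = 10baseT, 1 = AUI, 3 = BNC. */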
+ switch (tmp >> 14) {
+ case 0:
+ ecmd->port = PORT_TP;
+ break;
+ case 1:
+ ecmd->port = PORT_AUI;
+ ecmd->transceiver = XCVR_EXTERNAL;
+ break;
+ case 3:
+ ecmd->port = PORT_BNC;
+ default:
+ break;
+ }
+
+ ecmd->duplex = DUPLEX_HALF;
+ ecmd->supported = 0;
+ tmp = inw(ioaddr + WN0_CONF_CTRL);
+ if (tmp & (1<<13))
+ ecmd->supported |= SUPPORTED_AUI;
+ if (tmp & (1<<12))
+ ecmd->supported |= SUPPORTED_BNC;
+ if (tmp & (1<<9)) {
+ ecmd->supported |= SUPPORTED_TP | SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full; /* hmm... */
+ EL3WINDOW(4);
+ tmp = inw(ioaddr + WN4_NETDIAG);
+ if (tmp & FD_ENABLE)
+ ecmd->duplex = DUPLEX_FULL;
+ }
+
+ ethtool_cmd_speed_set(ecmd, SPEED_10);
+ EL3WINDOW(1);
+ return 0;
+}
+
+static int
+el3_netdev_set_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd)
+{
+ u16 tmp;
+ int ioaddr = dev->base_addr;
+
+ if (ecmd->speed != SPEED_10)
+ return -EINVAL;
+ if ((ecmd->duplex != DUPLEX_HALF) && (ecmd->duplex != DUPLEX_FULL))
+ return -EINVAL;
+ if ((ecmd->transceiver != XCVR_INTERNAL) && (ecmd->transceiver != XCVR_EXTERNAL))
+ return -EINVAL;
+
+ /* change XCVR type */
+ EL3WINDOW(0);
+ tmp = inw(ioaddr + WN0_ADDR_CONF);
+ switch (ecmd->port) {
+ case PORT_TP:
+ tmp &= ~(3<<14);
+ dev->if_port = 0;
+ break;
+ case PORT_AUI:
+ tmp |= (1<<14);
+ dev->if_port = 1;
+ break;
+ case PORT_BNC:
+ tmp |= (3<<14);
+ dev->if_port = 3;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ outw(tmp, ioaddr + WN0_ADDR_CONF);
+ if (dev->if_port == 3) {
+ /* fire up the DC-DC convertor if BNC gets enabled */
+ tmp = inw(ioaddr + WN0_ADDR_CONF);
+ if (tmp & (3 << 14)) {
+ outw(StartCoax, ioaddr + EL3_CMD);
+ udelay(800);
+ } else
+ return -EIO;
+ }
+
+ EL3WINDOW(4);
+ tmp = inw(ioaddr + WN4_NETDIAG);
+ if (ecmd->duplex == DUPLEX_FULL)
+ tmp |= FD_ENABLE;
+ else
+ tmp &= ~FD_ENABLE;
+ outw(tmp, ioaddr + WN4_NETDIAG);
+ EL3WINDOW(1);
+
+ return 0;
+}
+
+static void el3_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
+{
+ strcpy(info->driver, DRV_NAME);
+ strcpy(info->version, DRV_VERSION);
+}
+
+static int el3_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ int ret;
+
+ spin_lock_irq(&lp->lock);
+ ret = el3_netdev_get_ecmd(dev, ecmd);
+ spin_unlock_irq(&lp->lock);
+ return ret;
+}
+
+static int el3_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ int ret;
+
+ spin_lock_irq(&lp->lock);
+ ret = el3_netdev_set_ecmd(dev, ecmd);
+ spin_unlock_irq(&lp->lock);
+ return ret;
+}
+
+static u32 el3_get_link(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ u32 ret;
+
+ spin_lock_irq(&lp->lock);
+ ret = el3_link_ok(dev);
+ spin_unlock_irq(&lp->lock);
+ return ret;
+}
+
+static u32 el3_get_msglevel(struct net_device *dev)
+{
+ return el3_debug;
+}
+
+static void el3_set_msglevel(struct net_device *dev, u32 v)
+{
+ el3_debug = v;
+}
+
+static const struct ethtool_ops ethtool_ops = {
+ .get_drvinfo = el3_get_drvinfo,
+ .get_settings = el3_get_settings,
+ .set_settings = el3_set_settings,
+ .get_link = el3_get_link,
+ .get_msglevel = el3_get_msglevel,
+ .set_msglevel = el3_set_msglevel,
+};
+
+static void
+el3_down(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+
+ netif_stop_queue(dev);
+
+ /* Turn off statistics ASAP. We update lp->stats below. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+
+ /* Disable the receiver and transmitter. */
+ outw(RxDisable, ioaddr + EL3_CMD);
+ outw(TxDisable, ioaddr + EL3_CMD);
+
+ if (dev->if_port == 3)
+ /* Turn off thinnet power. Green! */
+ outw(StopCoax, ioaddr + EL3_CMD);
+ else if (dev->if_port == 0) {
+ /* Disable link beat and jabber; if_port may change before the next open(). */
+ EL3WINDOW(4);
+ outw(inw(ioaddr + WN4_MEDIA) & ~MEDIA_TP, ioaddr + WN4_MEDIA);
+ }
+
+ outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
+
+ update_stats(dev);
+}
+
+static void
+el3_up(struct net_device *dev)
+{
+ int i, sw_info, net_diag;
+ int ioaddr = dev->base_addr;
+
+ /* Activating the board is required and does no harm otherwise */
+ outw(0x0001, ioaddr + 4);
+
+ /* Set the IRQ line. */
+ outw((dev->irq << 12) | 0x0f00, ioaddr + WN0_IRQ);
+
+ /* Set the station address in window 2 each time opened. */
+ EL3WINDOW(2);
+
+ for (i = 0; i < 6; i++)
+ outb(dev->dev_addr[i], ioaddr + i);
+
+ if ((dev->if_port & 0x03) == 3) /* BNC interface */
+ /* Start the thinnet transceiver. We should really wait 50ms...*/
+ outw(StartCoax, ioaddr + EL3_CMD);
+ else if ((dev->if_port & 0x03) == 0) { /* 10baseT interface */
+ /* Combine secondary sw_info word (the adapter level) and primary
+ sw_info word (duplex setting plus other useless bits) */
+ EL3WINDOW(0);
+ sw_info = (read_eeprom(ioaddr, 0x14) & 0x400f) |
+ (read_eeprom(ioaddr, 0x0d) & 0xBff0);
+
+ EL3WINDOW(4);
+ net_diag = inw(ioaddr + WN4_NETDIAG);
+ net_diag = (net_diag | FD_ENABLE); /* temporarily assume full-duplex will be set */
+ pr_info("%s: ", dev->name);
+ switch (dev->if_port & 0x0c) {
+ case 12:
+ /* force full-duplex mode if 3c5x9b */
+ if (sw_info & 0x000f) {
+ pr_cont("Forcing 3c5x9b full-duplex mode");
+ break;
+ }
+ case 8:
+ /* set full-duplex mode based on eeprom config setting */
+ if ((sw_info & 0x000f) && (sw_info & 0x8000)) {
+ pr_cont("Setting 3c5x9b full-duplex mode (from EEPROM configuration bit)");
+ break;
+ }
+ default:
+ /* xcvr=(0 || 4) OR user has an old 3c5x9 non "B" model */
+ pr_cont("Setting 3c5x9/3c5x9B half-duplex mode");
+ net_diag = (net_diag & ~FD_ENABLE); /* disable full duplex */
+ }
+
+ outw(net_diag, ioaddr + WN4_NETDIAG);
+ pr_cont(" if_port: %d, sw_info: %4.4x\n", dev->if_port, sw_info);
+ if (el3_debug > 3)
+ pr_debug("%s: 3c5x9 net diag word is now: %4.4x.\n", dev->name, net_diag);
+ /* Enable link beat and jabber check. */
+ outw(inw(ioaddr + WN4_MEDIA) | MEDIA_TP, ioaddr + WN4_MEDIA);
+ }
+
+ /* Switch to the stats window, and clear all stats by reading. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+ EL3WINDOW(6);
+ for (i = 0; i < 9; i++)
+ inb(ioaddr + i);
+ inw(ioaddr + 10);
+ inw(ioaddr + 12);
+
+ /* Switch to register set 1 for normal use. */
+ EL3WINDOW(1);
+
+ /* Accept broadcast and station (physical) address only. */
+ outw(SetRxFilter | RxStation | RxBroadcast, ioaddr + EL3_CMD);
+ outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
+
+ outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
+ outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
+ /* Allow status bits to be seen. */
+ outw(SetStatusEnb | 0xff, ioaddr + EL3_CMD);
+ /* Ack all pending events, and set active indicator mask. */
+ outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
+ ioaddr + EL3_CMD);
+ outw(SetIntrEnb | IntLatch|TxAvailable|TxComplete|RxComplete|StatsFull,
+ ioaddr + EL3_CMD);
+
+ netif_start_queue(dev);
+}
+
+/* Power Management support functions */
+#ifdef CONFIG_PM
+
+static int
+el3_suspend(struct device *pdev, pm_message_t state)
+{
+ unsigned long flags;
+ struct net_device *dev;
+ struct el3_private *lp;
+ int ioaddr;
+
+ dev = dev_get_drvdata(pdev);
+ lp = netdev_priv(dev);
+ ioaddr = dev->base_addr;
+
+ spin_lock_irqsave(&lp->lock, flags);
+
+ if (netif_running(dev))
+ netif_device_detach(dev);
+
+ el3_down(dev);
+ outw(PowerDown, ioaddr + EL3_CMD);
+
+ spin_unlock_irqrestore(&lp->lock, flags);
+ return 0;
+}
+
+static int
+el3_resume(struct device *pdev)
+{
+ unsigned long flags;
+ struct net_device *dev;
+ struct el3_private *lp;
+ int ioaddr;
+
+ dev = dev_get_drvdata(pdev);
+ lp = netdev_priv(dev);
+ ioaddr = dev->base_addr;
+
+ spin_lock_irqsave(&lp->lock, flags);
+
+ outw(PowerUp, ioaddr + EL3_CMD);
+ EL3WINDOW(0);
+ el3_up(dev);
+
+ if (netif_running(dev))
+ netif_device_attach(dev);
+
+ spin_unlock_irqrestore(&lp->lock, flags);
+ return 0;
+}
+
+#endif /* CONFIG_PM */
+
+module_param(debug, int, 0);
+module_param_array(irq, int, NULL, 0);
+module_param(max_interrupt_work, int, 0);
+MODULE_PARM_DESC(debug, "debug level (0-6)");
+MODULE_PARM_DESC(irq, "IRQ number(s) (assigned)");
+MODULE_PARM_DESC(max_interrupt_work, "maximum events handled per interrupt");
+#ifdef CONFIG_PNP
+module_param(nopnp, int, 0);
+MODULE_PARM_DESC(nopnp, "disable ISA PnP support (0-1)");
+#endif /* CONFIG_PNP */
+MODULE_DESCRIPTION("3Com Etherlink III (3c509, 3c509B, 3c529, 3c579) ethernet driver");
+MODULE_LICENSE("GPL");
+
+static int __init el3_init_module(void)
+{
+ int ret = 0;
+
+ if (debug >= 0)
+ el3_debug = debug;
+
+#ifdef CONFIG_PNP
+ if (!nopnp) {
+ ret = pnp_register_driver(&el3_pnp_driver);
+ if (!ret)
+ pnp_registered = 1;
+ }
+#endif
+ /* Select an open I/O location at 0x1*0 to do ISA contention select. */
+ /* Start with 0x110 to avoid some sound cards.*/
+ for (id_port = 0x110 ; id_port < 0x200; id_port += 0x10) {
+ if (!request_region(id_port, 1, "3c509-control"))
+ continue;
+ outb(0x00, id_port);
+ outb(0xff, id_port);
+ if (inb(id_port) & 0x01)
+ break;
+ else
+ release_region(id_port, 1);
+ }
+ if (id_port >= 0x200) {
+ id_port = 0;
+ pr_err("No I/O port available for 3c509 activation.\n");
+ } else {
+ ret = isa_register_driver(&el3_isa_driver, EL3_MAX_CARDS);
+ if (!ret)
+ isa_registered = 1;
+ }
+#ifdef CONFIG_EISA
+ ret = eisa_driver_register(&el3_eisa_driver);
+ if (!ret)
+ eisa_registered = 1;
+#endif
+#ifdef CONFIG_MCA
+ ret = mca_register_driver(&el3_mca_driver);
+ if (!ret)
+ mca_registered = 1;
+#endif
+
+#ifdef CONFIG_PNP
+ if (pnp_registered)
+ ret = 0;
+#endif
+ if (isa_registered)
+ ret = 0;
+#ifdef CONFIG_EISA
+ if (eisa_registered)
+ ret = 0;
+#endif
+#ifdef CONFIG_MCA
+ if (mca_registered)
+ ret = 0;
+#endif
+ return ret;
+}
+
+static void __exit el3_cleanup_module(void)
+{
+#ifdef CONFIG_PNP
+ if (pnp_registered)
+ pnp_unregister_driver(&el3_pnp_driver);
+#endif
+ if (isa_registered)
+ isa_unregister_driver(&el3_isa_driver);
+ if (id_port)
+ release_region(id_port, 1);
+#ifdef CONFIG_EISA
+ if (eisa_registered)
+ eisa_driver_unregister(&el3_eisa_driver);
+#endif
+#ifdef CONFIG_MCA
+ if (mca_registered)
+ mca_unregister_driver(&el3_mca_driver);
+#endif
+}
+
+module_init (el3_init_module);
+module_exit (el3_cleanup_module);
--- /dev/null
+/*
+ Written 1997-1998 by Donald Becker.
+
+ This software may be used and distributed according to the terms
+ of the GNU General Public License, incorporated herein by reference.
+
+ This driver is for the 3Com ISA EtherLink XL "Corkscrew" 3c515 ethercard.
+
+ The author may be reached as becker@scyld.com, or C/O
+ Scyld Computing Corporation
+ 410 Severn Ave., Suite 210
+ Annapolis MD 21403
+
+
+ 2000/2/2- Added support for kernel-level ISAPnP
+ by Stephen Frost <sfrost@snowman.net> and Alessandro Zummo
+ Cleaned up for 2.3.x/softnet by Jeff Garzik and Alan Cox.
+
+ 2001/11/17 - Added ethtool support (jgarzik)
+
+ 2002/10/28 - Locking updates for 2.5 (alan@lxorguk.ukuu.org.uk)
+
+*/
+
+#define DRV_NAME "3c515"
+#define DRV_VERSION "0.99t-ac"
+#define DRV_RELDATE "28-Oct-2002"
+
+static char *version =
+DRV_NAME ".c:v" DRV_VERSION " " DRV_RELDATE " becker@scyld.com and others\n";
+
+#define CORKSCREW 1
+
+/* "Knobs" that adjust features and parameters. */
+/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
+ Setting to > 1512 effectively disables this feature. */
+static int rx_copybreak = 200;
+
+/* Allow setting MTU to a larger size, bypassing the normal ethernet setup. */
+static const int mtu = 1500;
+
+/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
+static int max_interrupt_work = 20;
+
+/* Enable the automatic media selection code -- usually set. */
+#define AUTOMEDIA 1
+
+/* Allow the use of fragment bus master transfers instead of only
+ programmed-I/O for Vortex cards. Full-bus-master transfers are always
+ enabled by default on Boomerang cards. If VORTEX_BUS_MASTER is defined,
+ the feature may be turned on using 'options'. */
+#define VORTEX_BUS_MASTER
+
+/* A few values that may be tweaked. */
+/* Keep the ring sizes a power of two for efficiency. */
+#define TX_RING_SIZE 16
+#define RX_RING_SIZE 16
+#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer. */
+
+#include <linux/module.h>
+#include <linux/isapnp.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/in.h>
+#include <linux/ioport.h>
+#include <linux/skbuff.h>
+#include <linux/etherdevice.h>
+#include <linux/interrupt.h>
+#include <linux/timer.h>
+#include <linux/ethtool.h>
+#include <linux/bitops.h>
+
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/dma.h>
+
+#define NEW_MULTICAST
+#include <linux/delay.h>
+
+#define MAX_UNITS 8
+
+MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
+MODULE_DESCRIPTION("3Com 3c515 Corkscrew driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(DRV_VERSION);
+
+/* "Knobs" for adjusting internal parameters. */
+/* Put out somewhat more debugging messages. (0 - no msg, 1 minimal msgs). */
+#define DRIVER_DEBUG 1
+/* Some values here only for performance evaluation and path-coverage
+ debugging. */
+static int rx_nocopy, rx_copy, queued_packet;
+
+/* Number of times to check to see if the Tx FIFO has space, used in some
+ limited cases. */
+#define WAIT_TX_AVAIL 200
+
+/* Operational parameters that usually are not changed. */
+#define TX_TIMEOUT ((4*HZ)/10) /* Time in jiffies before concluding Tx hung */
+
+/* The size here is somewhat misleading: the Corkscrew also uses the ISA
+ aliased registers at <base>+0x400.
+ */
+#define CORKSCREW_TOTAL_SIZE 0x20
+
+#ifdef DRIVER_DEBUG
+static int corkscrew_debug = DRIVER_DEBUG;
+#else
+static int corkscrew_debug = 1;
+#endif
+
+#define CORKSCREW_ID 10
+
+/*
+ Theory of Operation
+
+I. Board Compatibility
+
+This device driver is designed for the 3Com 3c515 ISA Fast EtherLink XL,
+3Com's ISA bus adapter for Fast Ethernet. Due to the unique I/O port layout,
+it's not practical to integrate this driver with the other EtherLink drivers.
+
+II. Board-specific settings
+
+The Corkscrew has an EEPROM for configuration, but no special settings are
+needed for Linux.
+
+III. Driver operation
+
+The 3c515 series use an interface that's very similar to the 3c900 "Boomerang"
+PCI cards, with the bus master interface extensively modified to work with
+the ISA bus.
+
+The card is capable of full-bus-master transfers with separate
+lists of transmit and receive descriptors, similar to the AMD LANCE/PCnet,
+DEC Tulip and Intel Speedo3.
+
+This driver uses a "RX_COPYBREAK" scheme rather than a fixed intermediate
+receive buffer. This scheme allocates full-sized skbuffs as receive
+buffers. The value RX_COPYBREAK is used as the copying breakpoint: it is
+chosen to trade off the memory wasted by passing full-sized skbuffs to the
+queue layer for every frame against the cost of copying small frames into
+correctly-sized skbuffs.
+
+
+IIIC. Synchronization
+The driver runs as two independent, single-threaded flows of control. One
+is the send-packet routine, whose single-threaded use is enforced by the
+netif layer. The other is the interrupt handler, which is single-threaded
+by the hardware and by the rest of the software.
+
+IV. Notes
+
+Thanks to Terry Murphy of 3Com for providing documentation and a development
+board.
+
+The names "Vortex", "Boomerang" and "Corkscrew" are the internal 3Com
+project names. I use these names to eliminate confusion -- 3Com product
+numbers and names are very similar and often confused.
+
+The new chips support both ethernet (1.5K) and FDDI (4.5K) frame sizes!
+This driver only supports ethernet frames because of the recent MTU limit
+of 1.5K, but the changes to support 4.5K are minimal.
+*/
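+/*
+ * Hedged illustration of the RX_COPYBREAK scheme described above (this is
+ * not new driver logic, just a restatement of what boomerang_rx() below
+ * already does for a received frame of pkt_len bytes):
+ *
+ *	if (pkt_len < rx_copybreak &&
+ *	    (skb = dev_alloc_skb(pkt_len + 4)) != NULL) {
+ *		small frame: copy it into a right-sized skbuff and keep
+ *		the full-sized ring buffer for reuse
+ *	} else {
+ *		large frame: pass the full-sized ring skbuff up as-is and
+ *		refill that ring slot later
+ *	}
+ *
+ * Setting rx_copybreak above the maximum frame size therefore sends every
+ * frame down the copy path.
+ */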
+
+/* Operational definitions.
+ These are not used by other compilation units and thus are not
+ exported in a ".h" file.
+
+ First the windows. There are eight register windows, with the command
+ and status registers available in each.
+ */
+#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
+#define EL3_CMD 0x0e
+#define EL3_STATUS 0x0e
+
+/* The top five bits written to EL3_CMD are a command, the lower
+ 11 bits are the parameter, if applicable.
+ Note that 11 parameter bits were fine for ethernet, but the new chips
+ can handle FDDI-length frames (~4500 octets), so parameters now count
+ 32-bit 'Dwords' rather than octets. */
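+/* Illustration only (not part of the original driver): a command word is
+   the 5-bit opcode in bits 15..11 plus an 11-bit argument in the low bits,
+   so selecting register window 3 is written as
+
+	outw(SelectWindow + 3, ioaddr + EL3_CMD);	  i.e. (1 << 11) | 3
+
+   and setting the Tx threshold to 1536 bytes (counted in dwords on these
+   chips) as
+
+	outw(SetTxThreshold + (1536 >> 2), ioaddr + EL3_CMD);
+
+   Both forms appear verbatim later in this file. */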
+
+enum corkscrew_cmd {
+ TotalReset = 0 << 11, SelectWindow = 1 << 11, StartCoax = 2 << 11,
+ RxDisable = 3 << 11, RxEnable = 4 << 11, RxReset = 5 << 11,
+ UpStall = 6 << 11, UpUnstall = (6 << 11) + 1, DownStall = (6 << 11) + 2,
+ DownUnstall = (6 << 11) + 3, RxDiscard = 8 << 11, TxEnable = 9 << 11,
+ TxDisable = 10 << 11, TxReset = 11 << 11, FakeIntr = 12 << 11,
+ AckIntr = 13 << 11, SetIntrEnb = 14 << 11, SetStatusEnb = 15 << 11,
+ SetRxFilter = 16 << 11, SetRxThreshold = 17 << 11,
+ SetTxThreshold = 18 << 11, SetTxStart = 19 << 11, StartDMAUp = 20 << 11,
+ StartDMADown = (20 << 11) + 1, StatsEnable = 21 << 11,
+ StatsDisable = 22 << 11, StopCoax = 23 << 11,
+};
+
+/* The SetRxFilter command accepts the following classes: */
+enum RxFilter {
+ RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8
+};
+
+/* Bits in the general status register. */
+enum corkscrew_status {
+ IntLatch = 0x0001, AdapterFailure = 0x0002, TxComplete = 0x0004,
+ TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
+ IntReq = 0x0040, StatsFull = 0x0080,
+ DMADone = 1 << 8, DownComplete = 1 << 9, UpComplete = 1 << 10,
+ DMAInProgress = 1 << 11, /* DMA controller is still busy. */
+ CmdInProgress = 1 << 12, /* EL3_CMD is still busy. */
+};
+
+/* Register window 1 offsets, the window used in normal operation.
+ On the Corkscrew this window is always mapped at offsets 0x10-0x1f. */
+enum Window1 {
+ TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
+ RxStatus = 0x18, Timer = 0x1A, TxStatus = 0x1B,
+ TxFree = 0x1C, /* Remaining free bytes in Tx buffer. */
+};
+enum Window0 {
+ Wn0IRQ = 0x08,
+#if defined(CORKSCREW)
+ Wn0EepromCmd = 0x200A, /* Corkscrew EEPROM command register. */
+ Wn0EepromData = 0x200C, /* Corkscrew EEPROM results register. */
+#else
+ Wn0EepromCmd = 10, /* Window 0: EEPROM command register. */
+ Wn0EepromData = 12, /* Window 0: EEPROM results register. */
+#endif
+};
+enum Win0_EEPROM_bits {
+ EEPROM_Read = 0x80, EEPROM_WRITE = 0x40, EEPROM_ERASE = 0xC0,
+ EEPROM_EWENB = 0x30, /* Enable erasing/writing for 10 msec. */
+ EEPROM_EWDIS = 0x00, /* Disable EWENB before 10 msec timeout. */
+};
+
+/* EEPROM locations. */
+enum eeprom_offset {
+ PhysAddr01 = 0, PhysAddr23 = 1, PhysAddr45 = 2, ModelID = 3,
+ EtherLink3ID = 7,
+};
+
+enum Window3 { /* Window 3: MAC/config bits. */
+ Wn3_Config = 0, Wn3_MAC_Ctrl = 6, Wn3_Options = 8,
+};
+enum wn3_config {
+ Ram_size = 7,
+ Ram_width = 8,
+ Ram_speed = 0x30,
+ Rom_size = 0xc0,
+ Ram_split_shift = 16,
+ Ram_split = 3 << Ram_split_shift,
+ Xcvr_shift = 20,
+ Xcvr = 7 << Xcvr_shift,
+ Autoselect = 0x1000000,
+};
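+/* Decoding Wn3_Config -- an illustrative note, not original driver code:
+   corkscrew_setup() below extracts the fields of the 32-bit internal
+   configuration dword by mask-and-shift, e.g.
+
+	xcvr      = (config & Xcvr) >> Xcvr_shift;
+	ram_split = (config & Ram_split) >> Ram_split_shift;
+	ram_kb    = 8 << (config & Ram_size);	  on-card RAM size in KB
+
+   and writes a new transceiver selection back the same way:
+
+	config = (config & ~Xcvr) | (dev->if_port << Xcvr_shift);
+	outl(config, ioaddr + Wn3_Config);
+ */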
+
+enum Window4 {
+ Wn4_NetDiag = 6, Wn4_Media = 10, /* Window 4: Xcvr/media bits. */
+};
+enum Win4_Media_bits {
+ Media_SQE = 0x0008, /* Enable SQE error counting for AUI. */
+ Media_10TP = 0x00C0, /* Enable link beat and jabber for 10baseT. */
+ Media_Lnk = 0x0080, /* Enable just link beat for 100TX/100FX. */
+ Media_LnkBeat = 0x0800,
+};
+enum Window7 { /* Window 7: Bus Master control. */
+ Wn7_MasterAddr = 0, Wn7_MasterLen = 6, Wn7_MasterStatus = 12,
+};
+
+/* Boomerang-style bus master control registers. Note ISA aliases! */
+enum MasterCtrl {
+ PktStatus = 0x400, DownListPtr = 0x404, FragAddr = 0x408, FragLen =
+ 0x40c,
+ TxFreeThreshold = 0x40f, UpPktStatus = 0x410, UpListPtr = 0x418,
+};
+
+/* The Rx and Tx descriptor lists.
+ Caution Alpha hackers: these types are 32 bits! Note also the 8-byte
+ alignment constraint on tx_ring[] and rx_ring[]. */
+struct boom_rx_desc {
+ u32 next;
+ s32 status;
+ u32 addr;
+ s32 length;
+};
+
+/* Values for the Rx status entry. */
+enum rx_desc_status {
+ RxDComplete = 0x00008000, RxDError = 0x4000,
+ /* See boomerang_rx() for actual error bits */
+};
+
+struct boom_tx_desc {
+ u32 next;
+ s32 status;
+ u32 addr;
+ s32 length;
+};
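+/* Ring usage sketch (illustration only, mirroring corkscrew_open() below):
+   each descriptor's 'next' field holds the ISA bus address of the following
+   descriptor, so the Rx ring is linked as
+
+	vp->rx_ring[i].next = isa_virt_to_bus(&vp->rx_ring[i + 1]);
+	vp->rx_ring[i].addr = isa_virt_to_bus(skb->data);
+	vp->rx_ring[i].length = PKT_BUF_SZ | 0x80000000;
+
+   with the last entry pointing back to &vp->rx_ring[0], and the chip is
+   told where the list starts with
+
+	outl(isa_virt_to_bus(&vp->rx_ring[0]), ioaddr + UpListPtr);
+ */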
+
+struct corkscrew_private {
+ const char *product_name;
+ struct list_head list;
+ struct net_device *our_dev;
+ /* The Rx and Tx rings are here to keep them quad-word-aligned. */
+ struct boom_rx_desc rx_ring[RX_RING_SIZE];
+ struct boom_tx_desc tx_ring[TX_RING_SIZE];
+ /* The addresses of transmit- and receive-in-place skbuffs. */
+ struct sk_buff *rx_skbuff[RX_RING_SIZE];
+ struct sk_buff *tx_skbuff[TX_RING_SIZE];
+ unsigned int cur_rx, cur_tx; /* The next free ring entry */
+ unsigned int dirty_rx, dirty_tx;/* The ring entries to be free()ed. */
+ struct sk_buff *tx_skb; /* Packet being eaten by bus master ctrl. */
+ struct timer_list timer; /* Media selection timer. */
+ int capabilities ; /* Adapter capabilities word. */
+ int options; /* User-settable misc. driver options. */
+ int last_rx_packets; /* For media autoselection. */
+ unsigned int available_media:8, /* From Wn3_Options */
+ media_override:3, /* Passed-in media type. */
+ default_media:3, /* Read from the EEPROM. */
+ full_duplex:1, autoselect:1, bus_master:1, /* Vortex can only do a fragment bus-m. */
+ full_bus_master_tx:1, full_bus_master_rx:1, /* Boomerang */
+ tx_full:1;
+ spinlock_t lock;
+ struct device *dev;
+};
+
+/* The action to take with a media selection timer tick.
+ Note that we deviate from the 3Com order by checking 10base2 before AUI.
+ */
+enum xcvr_types {
+ XCVR_10baseT = 0, XCVR_AUI, XCVR_10baseTOnly, XCVR_10base2, XCVR_100baseTx,
+ XCVR_100baseFx, XCVR_MII = 6, XCVR_Default = 8,
+};
+
+static struct media_table {
+ char *name;
+ unsigned int media_bits:16, /* Bits to set in Wn4_Media register. */
+ mask:8, /* The transceiver-present bit in Wn3_Config. */
+ next:8; /* The media type to try next. */
+ short wait; /* Time before we check media status. */
+} media_tbl[] = {
+ { "10baseT", Media_10TP, 0x08, XCVR_10base2, (14 * HZ) / 10 },
+ { "10Mbs AUI", Media_SQE, 0x20, XCVR_Default, (1 * HZ) / 10},
+ { "undefined", 0, 0x80, XCVR_10baseT, 10000},
+ { "10base2", 0, 0x10, XCVR_AUI, (1 * HZ) / 10},
+ { "100baseTX", Media_Lnk, 0x02, XCVR_100baseFx, (14 * HZ) / 10},
+ { "100baseFX", Media_Lnk, 0x04, XCVR_MII, (14 * HZ) / 10},
+ { "MII", 0, 0x40, XCVR_10baseT, 3 * HZ},
+ { "undefined", 0, 0x01, XCVR_10baseT, 10000},
+ { "Default", 0, 0xFF, XCVR_10baseT, 10000},
+};
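+/* Illustration (not new driver logic): autoselection walks this table via
+   the 'next' links until a transceiver the card reports as present is
+   found, exactly as corkscrew_open() does:
+
+	dev->if_port = 4;	  start with 100baseTx
+	while (!(vp->available_media & media_tbl[dev->if_port].mask))
+		dev->if_port = media_tbl[dev->if_port].next;
+
+   The per-entry 'wait' field then tells corkscrew_timer() how long to wait
+   before checking for link beat on the chosen medium. */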
+
+#ifdef __ISAPNP__
+static struct isapnp_device_id corkscrew_isapnp_adapters[] = {
+ { ISAPNP_ANY_ID, ISAPNP_ANY_ID,
+ ISAPNP_VENDOR('T', 'C', 'M'), ISAPNP_FUNCTION(0x5051),
+ (long) "3Com Fast EtherLink ISA" },
+ { } /* terminate list */
+};
+
+MODULE_DEVICE_TABLE(isapnp, corkscrew_isapnp_adapters);
+
+static int nopnp;
+#endif /* __ISAPNP__ */
+
+static struct net_device *corkscrew_scan(int unit);
+static int corkscrew_setup(struct net_device *dev, int ioaddr,
+ struct pnp_dev *idev, int card_number);
+static int corkscrew_open(struct net_device *dev);
+static void corkscrew_timer(unsigned long arg);
+static netdev_tx_t corkscrew_start_xmit(struct sk_buff *skb,
+ struct net_device *dev);
+static int corkscrew_rx(struct net_device *dev);
+static void corkscrew_timeout(struct net_device *dev);
+static int boomerang_rx(struct net_device *dev);
+static irqreturn_t corkscrew_interrupt(int irq, void *dev_id);
+static int corkscrew_close(struct net_device *dev);
+static void update_stats(int addr, struct net_device *dev);
+static struct net_device_stats *corkscrew_get_stats(struct net_device *dev);
+static void set_rx_mode(struct net_device *dev);
+static const struct ethtool_ops netdev_ethtool_ops;
+
+
+/*
+ Unfortunately maximizing the shared code between the integrated and
+ module version of the driver results in a complicated set of initialization
+ procedures.
+ init_module() -- modules / tc59x_init() -- built-in
+ The wrappers for corkscrew_scan()
+ corkscrew_scan() The common routine that scans for PCI and EISA cards
+ corkscrew_found_device() Allocate a device structure when we find a card.
+ Different versions exist for modules and built-in.
+ corkscrew_probe1() Fill in the device structure -- this is separated
+ so that the modules code can put it in dev->init.
+*/
+/* This driver uses 'options' to pass the media type, full-duplex flag, etc. */
+/* Note: this is the only limit on the number of cards supported!! */
+static int options[MAX_UNITS] = { -1, -1, -1, -1, -1, -1, -1, -1, };
+
+#ifdef MODULE
+static int debug = -1;
+
+module_param(debug, int, 0);
+module_param_array(options, int, NULL, 0);
+module_param(rx_copybreak, int, 0);
+module_param(max_interrupt_work, int, 0);
+MODULE_PARM_DESC(debug, "3c515 debug level (0-6)");
+MODULE_PARM_DESC(options, "3c515: Bits 0-2: media type, bit 3: full duplex, bit 4: bus mastering");
+MODULE_PARM_DESC(rx_copybreak, "3c515 copy breakpoint for copy-only-tiny-frames");
+MODULE_PARM_DESC(max_interrupt_work, "3c515 maximum events handled per interrupt");
+
+/* A list of all installed Vortex devices, for removing the driver module. */
+/* we will need locking (and refcounting) if we ever use it for more */
+static LIST_HEAD(root_corkscrew_dev);
+
+int init_module(void)
+{
+ int found = 0;
+ if (debug >= 0)
+ corkscrew_debug = debug;
+ if (corkscrew_debug)
+ pr_debug("%s", version);
+ while (corkscrew_scan(-1))
+ found++;
+ return found ? 0 : -ENODEV;
+}
+
+#else
+struct net_device *tc515_probe(int unit)
+{
+ struct net_device *dev = corkscrew_scan(unit);
+ static int printed;
+
+ if (!dev)
+ return ERR_PTR(-ENODEV);
+
+ if (corkscrew_debug > 0 && !printed) {
+ printed = 1;
+ pr_debug("%s", version);
+ }
+
+ return dev;
+}
+#endif /* not MODULE */
+
+static int check_device(unsigned ioaddr)
+{
+ int timer;
+
+ if (!request_region(ioaddr, CORKSCREW_TOTAL_SIZE, "3c515"))
+ return 0;
+ /* Check the resource configuration for a matching ioaddr. */
+ if ((inw(ioaddr + 0x2002) & 0x1f0) != (ioaddr & 0x1f0)) {
+ release_region(ioaddr, CORKSCREW_TOTAL_SIZE);
+ return 0;
+ }
+ /* Verify by reading the device ID from the EEPROM. */
+ outw(EEPROM_Read + 7, ioaddr + Wn0EepromCmd);
+ /* Pause for at least 162 us for the read to take place. */
+ for (timer = 4; timer >= 0; timer--) {
+ udelay(162);
+ if ((inw(ioaddr + Wn0EepromCmd) & 0x0200) == 0)
+ break;
+ }
+ if (inw(ioaddr + Wn0EepromData) != 0x6d50) {
+ release_region(ioaddr, CORKSCREW_TOTAL_SIZE);
+ return 0;
+ }
+ return 1;
+}
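+/* The EEPROM access above is repeated in corkscrew_setup(); a hypothetical
+   helper (sketch only -- this driver does not define or use it) would look
+   like:
+
+	static u16 corkscrew_read_eeprom(unsigned ioaddr, int index)
+	{
+		int timer;
+
+		outw(EEPROM_Read + index, ioaddr + Wn0EepromCmd);
+		for (timer = 4; timer >= 0; timer--) {
+			udelay(162);	  wait >= 162 us, then poll busy bit
+			if ((inw(ioaddr + Wn0EepromCmd) & 0x0200) == 0)
+				break;
+		}
+		return inw(ioaddr + Wn0EepromData);
+	}
+ */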
+
+static void cleanup_card(struct net_device *dev)
+{
+ struct corkscrew_private *vp = netdev_priv(dev);
+ list_del_init(&vp->list);
+ if (dev->dma)
+ free_dma(dev->dma);
+ outw(TotalReset, dev->base_addr + EL3_CMD);
+ release_region(dev->base_addr, CORKSCREW_TOTAL_SIZE);
+ if (vp->dev)
+ pnp_device_detach(to_pnp_dev(vp->dev));
+}
+
+static struct net_device *corkscrew_scan(int unit)
+{
+ struct net_device *dev;
+ static int cards_found = 0;
+ static int ioaddr;
+ int err;
+#ifdef __ISAPNP__
+ short i;
+ static int pnp_cards;
+#endif
+
+ dev = alloc_etherdev(sizeof(struct corkscrew_private));
+ if (!dev)
+ return ERR_PTR(-ENOMEM);
+
+ if (unit >= 0) {
+ sprintf(dev->name, "eth%d", unit);
+ netdev_boot_setup_check(dev);
+ }
+
+#ifdef __ISAPNP__
+ if(nopnp == 1)
+ goto no_pnp;
+ for(i=0; corkscrew_isapnp_adapters[i].vendor != 0; i++) {
+ struct pnp_dev *idev = NULL;
+ int irq;
+ while((idev = pnp_find_dev(NULL,
+ corkscrew_isapnp_adapters[i].vendor,
+ corkscrew_isapnp_adapters[i].function,
+ idev))) {
+
+ if (pnp_device_attach(idev) < 0)
+ continue;
+ if (pnp_activate_dev(idev) < 0) {
+ pr_warning("pnp activate failed (out of resources?)\n");
+ pnp_device_detach(idev);
+ continue;
+ }
+ if (!pnp_port_valid(idev, 0) || !pnp_irq_valid(idev, 0)) {
+ pnp_device_detach(idev);
+ continue;
+ }
+ ioaddr = pnp_port_start(idev, 0);
+ irq = pnp_irq(idev, 0);
+ if (!check_device(ioaddr)) {
+ pnp_device_detach(idev);
+ continue;
+ }
+ if(corkscrew_debug)
+ pr_debug("ISAPNP reports %s at i/o 0x%x, irq %d\n",
+ (char*) corkscrew_isapnp_adapters[i].driver_data, ioaddr, irq);
+ pr_info("3c515 Resource configuration register %#4.4x, DCR %4.4x.\n",
+ inl(ioaddr + 0x2002), inw(ioaddr + 0x2000));
+ /* irq = inw(ioaddr + 0x2002) & 15; */ /* Use the irq from isapnp */
+ SET_NETDEV_DEV(dev, &idev->dev);
+ pnp_cards++;
+ err = corkscrew_setup(dev, ioaddr, idev, cards_found++);
+ if (!err)
+ return dev;
+ cleanup_card(dev);
+ }
+ }
+no_pnp:
+#endif /* __ISAPNP__ */
+
+ /* Check all locations on the ISA bus -- evil! */
+ for (ioaddr = 0x100; ioaddr < 0x400; ioaddr += 0x20) {
+ if (!check_device(ioaddr))
+ continue;
+
+ pr_info("3c515 Resource configuration register %#4.4x, DCR %4.4x.\n",
+ inl(ioaddr + 0x2002), inw(ioaddr + 0x2000));
+ err = corkscrew_setup(dev, ioaddr, NULL, cards_found++);
+ if (!err)
+ return dev;
+ cleanup_card(dev);
+ }
+ free_netdev(dev);
+ return NULL;
+}
+
+
+static const struct net_device_ops netdev_ops = {
+ .ndo_open = corkscrew_open,
+ .ndo_stop = corkscrew_close,
+ .ndo_start_xmit = corkscrew_start_xmit,
+ .ndo_tx_timeout = corkscrew_timeout,
+ .ndo_get_stats = corkscrew_get_stats,
+ .ndo_set_multicast_list = set_rx_mode,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
+
+static int corkscrew_setup(struct net_device *dev, int ioaddr,
+ struct pnp_dev *idev, int card_number)
+{
+ struct corkscrew_private *vp = netdev_priv(dev);
+ unsigned int eeprom[0x40], checksum = 0; /* EEPROM contents */
+ int i;
+ int irq;
+
+#ifdef __ISAPNP__
+ if (idev) {
+ irq = pnp_irq(idev, 0);
+ vp->dev = &idev->dev;
+ } else {
+ irq = inw(ioaddr + 0x2002) & 15;
+ }
+#else
+ irq = inw(ioaddr + 0x2002) & 15;
+#endif
+
+ dev->base_addr = ioaddr;
+ dev->irq = irq;
+ dev->dma = inw(ioaddr + 0x2000) & 7;
+ vp->product_name = "3c515";
+ vp->options = dev->mem_start;
+ vp->our_dev = dev;
+
+ if (!vp->options) {
+ if (card_number >= MAX_UNITS)
+ vp->options = -1;
+ else
+ vp->options = options[card_number];
+ }
+
+ if (vp->options >= 0) {
+ vp->media_override = vp->options & 7;
+ if (vp->media_override == 2)
+ vp->media_override = 0;
+ vp->full_duplex = (vp->options & 8) ? 1 : 0;
+ vp->bus_master = (vp->options & 16) ? 1 : 0;
+ } else {
+ vp->media_override = 7;
+ vp->full_duplex = 0;
+ vp->bus_master = 0;
+ }
+#ifdef MODULE
+ list_add(&vp->list, &root_corkscrew_dev);
+#endif
+
+ pr_info("%s: 3Com %s at %#3x,", dev->name, vp->product_name, ioaddr);
+
+ spin_lock_init(&vp->lock);
+
+ /* Read the station address from the EEPROM. */
+ EL3WINDOW(0);
+ for (i = 0; i < 0x18; i++) {
+ __be16 *phys_addr = (__be16 *) dev->dev_addr;
+ int timer;
+ outw(EEPROM_Read + i, ioaddr + Wn0EepromCmd);
+ /* Pause for at least 162 us for the read to take place. */
+ for (timer = 4; timer >= 0; timer--) {
+ udelay(162);
+ if ((inw(ioaddr + Wn0EepromCmd) & 0x0200) == 0)
+ break;
+ }
+ eeprom[i] = inw(ioaddr + Wn0EepromData);
+ checksum ^= eeprom[i];
+ if (i < 3)
+ phys_addr[i] = htons(eeprom[i]);
+ }
+ checksum = (checksum ^ (checksum >> 8)) & 0xff;
+ if (checksum != 0x00)
+ pr_cont(" ***INVALID CHECKSUM %4.4x*** ", checksum);
+ pr_cont(" %pM", dev->dev_addr);
+ if (eeprom[16] == 0x11c7) { /* Corkscrew */
+ if (request_dma(dev->dma, "3c515")) {
+ pr_cont(", DMA %d allocation failed", dev->dma);
+ dev->dma = 0;
+ } else
+ pr_cont(", DMA %d", dev->dma);
+ }
+ pr_cont(", IRQ %d\n", dev->irq);
+ /* Tell them about an invalid IRQ. */
+ if (corkscrew_debug && (dev->irq <= 0 || dev->irq > 15))
+ pr_warning(" *** Warning: this IRQ is unlikely to work! ***\n");
+
+ {
+ static const char * const ram_split[] = {
+ "5:3", "3:1", "1:1", "3:5"
+ };
+ __u32 config;
+ EL3WINDOW(3);
+ vp->available_media = inw(ioaddr + Wn3_Options);
+ config = inl(ioaddr + Wn3_Config);
+ if (corkscrew_debug > 1)
+ pr_info(" Internal config register is %4.4x, transceivers %#x.\n",
+ config, inw(ioaddr + Wn3_Options));
+ pr_info(" %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
+ 8 << (config & Ram_size),
+ config & Ram_width ? "word" : "byte",
+ ram_split[(config & Ram_split) >> Ram_split_shift],
+ config & Autoselect ? "autoselect/" : "",
+ media_tbl[(config & Xcvr) >> Xcvr_shift].name);
+ vp->default_media = (config & Xcvr) >> Xcvr_shift;
+ vp->autoselect = config & Autoselect ? 1 : 0;
+ dev->if_port = vp->default_media;
+ }
+ if (vp->media_override != 7) {
+ pr_info(" Media override to transceiver type %d (%s).\n",
+ vp->media_override,
+ media_tbl[vp->media_override].name);
+ dev->if_port = vp->media_override;
+ }
+
+ vp->capabilities = eeprom[16];
+ vp->full_bus_master_tx = (vp->capabilities & 0x20) ? 1 : 0;
+ /* Rx bus mastering was once disabled here ("broken at 10mbps"); the
+    disabled form is kept below for reference. */
+ /* vp->full_bus_master_rx = 0; */
+ vp->full_bus_master_rx = (vp->capabilities & 0x20) ? 1 : 0;
+
+ /* The 3c51x-specific entries in the device structure. */
+ dev->netdev_ops = &netdev_ops;
+ dev->watchdog_timeo = (400 * HZ) / 1000;
+ dev->ethtool_ops = &netdev_ethtool_ops;
+
+ return register_netdev(dev);
+}
+
+
+static int corkscrew_open(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+ struct corkscrew_private *vp = netdev_priv(dev);
+ __u32 config;
+ int i;
+
+ /* Before initializing select the active media port. */
+ EL3WINDOW(3);
+ if (vp->full_duplex)
+ outb(0x20, ioaddr + Wn3_MAC_Ctrl); /* Set the full-duplex bit. */
+ config = inl(ioaddr + Wn3_Config);
+
+ if (vp->media_override != 7) {
+ if (corkscrew_debug > 1)
+ pr_info("%s: Media override to transceiver %d (%s).\n",
+ dev->name, vp->media_override,
+ media_tbl[vp->media_override].name);
+ dev->if_port = vp->media_override;
+ } else if (vp->autoselect) {
+ /* Find first available media type, starting with 100baseTx. */
+ dev->if_port = 4;
+ while (!(vp->available_media & media_tbl[dev->if_port].mask))
+ dev->if_port = media_tbl[dev->if_port].next;
+
+ if (corkscrew_debug > 1)
+ pr_debug("%s: Initial media type %s.\n",
+ dev->name, media_tbl[dev->if_port].name);
+
+ init_timer(&vp->timer);
+ vp->timer.expires = jiffies + media_tbl[dev->if_port].wait;
+ vp->timer.data = (unsigned long) dev;
+ vp->timer.function = corkscrew_timer; /* timer handler */
+ add_timer(&vp->timer);
+ } else
+ dev->if_port = vp->default_media;
+
+ config = (config & ~Xcvr) | (dev->if_port << Xcvr_shift);
+ outl(config, ioaddr + Wn3_Config);
+
+ if (corkscrew_debug > 1) {
+ pr_debug("%s: corkscrew_open() InternalConfig %8.8x.\n",
+ dev->name, config);
+ }
+
+ outw(TxReset, ioaddr + EL3_CMD);
+ for (i = 20; i >= 0; i--)
+ if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
+ break;
+
+ outw(RxReset, ioaddr + EL3_CMD);
+ /* Wait a few ticks for the RxReset command to complete. */
+ for (i = 20; i >= 0; i--)
+ if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
+ break;
+
+ outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
+
+ /* Use the now-standard shared IRQ implementation. */
+ if (vp->capabilities == 0x11c7) {
+ /* Corkscrew: Cannot share ISA resources. */
+ if (dev->irq == 0 ||
+ dev->dma == 0 ||
+ request_irq(dev->irq, corkscrew_interrupt, 0,
+ vp->product_name, dev))
+ return -EAGAIN;
+ enable_dma(dev->dma);
+ set_dma_mode(dev->dma, DMA_MODE_CASCADE);
+ } else if (request_irq(dev->irq, corkscrew_interrupt, IRQF_SHARED,
+ vp->product_name, dev)) {
+ return -EAGAIN;
+ }
+
+ if (corkscrew_debug > 1) {
+ EL3WINDOW(4);
+ pr_debug("%s: corkscrew_open() irq %d media status %4.4x.\n",
+ dev->name, dev->irq, inw(ioaddr + Wn4_Media));
+ }
+
+ /* Set the station address and mask in window 2 each time opened. */
+ EL3WINDOW(2);
+ for (i = 0; i < 6; i++)
+ outb(dev->dev_addr[i], ioaddr + i);
+ for (; i < 12; i += 2)
+ outw(0, ioaddr + i);
+
+ if (dev->if_port == 3)
+ /* Start the thinnet transceiver. We should really wait 50ms... */
+ outw(StartCoax, ioaddr + EL3_CMD);
+ EL3WINDOW(4);
+ outw((inw(ioaddr + Wn4_Media) & ~(Media_10TP | Media_SQE)) |
+ media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
+
+ /* Switch to the stats window, and clear all stats by reading. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+ EL3WINDOW(6);
+ for (i = 0; i < 10; i++)
+ inb(ioaddr + i);
+ inw(ioaddr + 10);
+ inw(ioaddr + 12);
+ /* New: On the Vortex we must also clear the BadSSD counter. */
+ EL3WINDOW(4);
+ inb(ioaddr + 12);
+ /* ..and on the Boomerang we enable the extra statistics bits. */
+ outw(0x0040, ioaddr + Wn4_NetDiag);
+
+ /* Switch to register set 7 for normal use. */
+ EL3WINDOW(7);
+
+ if (vp->full_bus_master_rx) { /* Boomerang bus master. */
+ vp->cur_rx = vp->dirty_rx = 0;
+ if (corkscrew_debug > 2)
+ pr_debug("%s: Filling in the Rx ring.\n", dev->name);
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ struct sk_buff *skb;
+ if (i < (RX_RING_SIZE - 1))
+ vp->rx_ring[i].next =
+ isa_virt_to_bus(&vp->rx_ring[i + 1]);
+ else
+ vp->rx_ring[i].next = 0;
+ vp->rx_ring[i].status = 0; /* Clear complete bit. */
+ vp->rx_ring[i].length = PKT_BUF_SZ | 0x80000000;
+ skb = dev_alloc_skb(PKT_BUF_SZ);
+ vp->rx_skbuff[i] = skb;
+ if (skb == NULL)
+ break; /* Bad news! */
+ skb->dev = dev; /* Mark as being used by this device. */
+ skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
+ vp->rx_ring[i].addr = isa_virt_to_bus(skb->data);
+ }
+ if (i != 0)
+ vp->rx_ring[i - 1].next =
+ isa_virt_to_bus(&vp->rx_ring[0]); /* Wrap the ring. */
+ outl(isa_virt_to_bus(&vp->rx_ring[0]), ioaddr + UpListPtr);
+ }
+ if (vp->full_bus_master_tx) { /* Boomerang bus master Tx. */
+ vp->cur_tx = vp->dirty_tx = 0;
+ outb(PKT_BUF_SZ >> 8, ioaddr + TxFreeThreshold); /* Room for a packet. */
+ /* Clear the Tx ring. */
+ for (i = 0; i < TX_RING_SIZE; i++)
+ vp->tx_skbuff[i] = NULL;
+ outl(0, ioaddr + DownListPtr);
+ }
+ /* Set receiver mode: presumably accept broadcast and station address only. */
+ set_rx_mode(dev);
+ outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
+
+ netif_start_queue(dev);
+
+ outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
+ outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
+ /* Allow status bits to be seen. */
+ outw(SetStatusEnb | AdapterFailure | IntReq | StatsFull |
+ (vp->full_bus_master_tx ? DownComplete : TxAvailable) |
+ (vp->full_bus_master_rx ? UpComplete : RxComplete) |
+ (vp->bus_master ? DMADone : 0), ioaddr + EL3_CMD);
+ /* Ack all pending events, and set active indicator mask. */
+ outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
+ ioaddr + EL3_CMD);
+ outw(SetIntrEnb | IntLatch | TxAvailable | RxComplete | StatsFull
+ | (vp->bus_master ? DMADone : 0) | UpComplete | DownComplete,
+ ioaddr + EL3_CMD);
+
+ return 0;
+}
+
+static void corkscrew_timer(unsigned long data)
+{
+#ifdef AUTOMEDIA
+ struct net_device *dev = (struct net_device *) data;
+ struct corkscrew_private *vp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+ unsigned long flags;
+ int ok = 0;
+
+ if (corkscrew_debug > 1)
+ pr_debug("%s: Media selection timer tick happened, %s.\n",
+ dev->name, media_tbl[dev->if_port].name);
+
+ spin_lock_irqsave(&vp->lock, flags);
+
+ {
+ int old_window = inw(ioaddr + EL3_CMD) >> 13;
+ int media_status;
+ EL3WINDOW(4);
+ media_status = inw(ioaddr + Wn4_Media);
+ switch (dev->if_port) {
+ case 0:
+ case 4:
+ case 5: /* 10baseT, 100baseTX, 100baseFX */
+ if (media_status & Media_LnkBeat) {
+ ok = 1;
+ if (corkscrew_debug > 1)
+ pr_debug("%s: Media %s has link beat, %x.\n",
+ dev->name,
+ media_tbl[dev->if_port].name,
+ media_status);
+ } else if (corkscrew_debug > 1)
+ pr_debug("%s: Media %s is has no link beat, %x.\n",
+ dev->name,
+ media_tbl[dev->if_port].name,
+ media_status);
+
+ break;
+ default: /* Other media types handled by Tx timeouts. */
+ if (corkscrew_debug > 1)
+ pr_debug("%s: Media %s is has no indication, %x.\n",
+ dev->name,
+ media_tbl[dev->if_port].name,
+ media_status);
+ ok = 1;
+ }
+ if (!ok) {
+ __u32 config;
+
+ do {
+ dev->if_port =
+ media_tbl[dev->if_port].next;
+ }
+ while (!(vp->available_media & media_tbl[dev->if_port].mask));
+
+ if (dev->if_port == 8) { /* Go back to default. */
+ dev->if_port = vp->default_media;
+ if (corkscrew_debug > 1)
+ pr_debug("%s: Media selection failing, using default %s port.\n",
+ dev->name,
+ media_tbl[dev->if_port].name);
+ } else {
+ if (corkscrew_debug > 1)
+ pr_debug("%s: Media selection failed, now trying %s port.\n",
+ dev->name,
+ media_tbl[dev->if_port].name);
+ vp->timer.expires = jiffies + media_tbl[dev->if_port].wait;
+ add_timer(&vp->timer);
+ }
+ outw((media_status & ~(Media_10TP | Media_SQE)) |
+ media_tbl[dev->if_port].media_bits,
+ ioaddr + Wn4_Media);
+
+ EL3WINDOW(3);
+ config = inl(ioaddr + Wn3_Config);
+ config = (config & ~Xcvr) | (dev->if_port << Xcvr_shift);
+ outl(config, ioaddr + Wn3_Config);
+
+ outw(dev->if_port == 3 ? StartCoax : StopCoax,
+ ioaddr + EL3_CMD);
+ }
+ EL3WINDOW(old_window);
+ }
+
+ spin_unlock_irqrestore(&vp->lock, flags);
+ if (corkscrew_debug > 1)
+ pr_debug("%s: Media selection timer finished, %s.\n",
+ dev->name, media_tbl[dev->if_port].name);
+
+#endif /* AUTOMEDIA */
+}
+
+static void corkscrew_timeout(struct net_device *dev)
+{
+ int i;
+ struct corkscrew_private *vp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+
+ pr_warning("%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
+ dev->name, inb(ioaddr + TxStatus),
+ inw(ioaddr + EL3_STATUS));
+ /* Slight code bloat to be user friendly. */
+ if ((inb(ioaddr + TxStatus) & 0x88) == 0x88)
+ pr_warning("%s: Transmitter encountered 16 collisions --"
+ " network cable problem?\n", dev->name);
+#ifndef final_version
+ pr_debug(" Flags; bus-master %d, full %d; dirty %d current %d.\n",
+ vp->full_bus_master_tx, vp->tx_full, vp->dirty_tx,
+ vp->cur_tx);
+ pr_debug(" Down list %8.8x vs. %p.\n", inl(ioaddr + DownListPtr),
+ &vp->tx_ring[0]);
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ pr_debug(" %d: %p length %8.8x status %8.8x\n", i,
+ &vp->tx_ring[i],
+ vp->tx_ring[i].length, vp->tx_ring[i].status);
+ }
+#endif
+ /* Issue TX_RESET and TX_START commands. */
+ outw(TxReset, ioaddr + EL3_CMD);
+ for (i = 20; i >= 0; i--)
+ if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
+ break;
+ outw(TxEnable, ioaddr + EL3_CMD);
+ dev->trans_start = jiffies; /* prevent tx timeout */
+ dev->stats.tx_errors++;
+ dev->stats.tx_dropped++;
+ netif_wake_queue(dev);
+}
+
+static netdev_tx_t corkscrew_start_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ struct corkscrew_private *vp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+
+ /* Block a timer-based transmit from overlapping. */
+
+ netif_stop_queue(dev);
+
+ if (vp->full_bus_master_tx) { /* BOOMERANG bus-master */
+ /* Calculate the next Tx descriptor entry. */
+ int entry = vp->cur_tx % TX_RING_SIZE;
+ struct boom_tx_desc *prev_entry;
+ unsigned long flags;
+ int i;
+
+ if (vp->tx_full) /* No room left to transmit. */
+ return NETDEV_TX_BUSY;
+ if (vp->cur_tx != 0)
+ prev_entry = &vp->tx_ring[(vp->cur_tx - 1) % TX_RING_SIZE];
+ else
+ prev_entry = NULL;
+ if (corkscrew_debug > 3)
+ pr_debug("%s: Trying to send a packet, Tx index %d.\n",
+ dev->name, vp->cur_tx);
+ /* vp->tx_full = 1; */
+ vp->tx_skbuff[entry] = skb;
+ vp->tx_ring[entry].next = 0;
+ vp->tx_ring[entry].addr = isa_virt_to_bus(skb->data);
+ vp->tx_ring[entry].length = skb->len | 0x80000000;
+ vp->tx_ring[entry].status = skb->len | 0x80000000;
+
+ spin_lock_irqsave(&vp->lock, flags);
+ outw(DownStall, ioaddr + EL3_CMD);
+ /* Wait for the stall to complete. */
+ for (i = 20; i >= 0; i--)
+ if ((inw(ioaddr + EL3_STATUS) & CmdInProgress) == 0)
+ break;
+ if (prev_entry)
+ prev_entry->next = isa_virt_to_bus(&vp->tx_ring[entry]);
+ if (inl(ioaddr + DownListPtr) == 0) {
+ outl(isa_virt_to_bus(&vp->tx_ring[entry]),
+ ioaddr + DownListPtr);
+ queued_packet++;
+ }
+ outw(DownUnstall, ioaddr + EL3_CMD);
+ spin_unlock_irqrestore(&vp->lock, flags);
+
+ vp->cur_tx++;
+ if (vp->cur_tx - vp->dirty_tx > TX_RING_SIZE - 1)
+ vp->tx_full = 1;
+ else { /* Clear previous interrupt enable. */
+ if (prev_entry)
+ prev_entry->status &= ~0x80000000;
+ netif_wake_queue(dev);
+ }
+ return NETDEV_TX_OK;
+ }
+ /* Put out the doubleword header... */
+ outl(skb->len, ioaddr + TX_FIFO);
+ dev->stats.tx_bytes += skb->len;
+#ifdef VORTEX_BUS_MASTER
+ if (vp->bus_master) {
+ /* Set the bus-master controller to transfer the packet. */
+ outl((int) (skb->data), ioaddr + Wn7_MasterAddr);
+ outw((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
+ vp->tx_skb = skb;
+ outw(StartDMADown, ioaddr + EL3_CMD);
+ /* queue will be woken at the DMADone interrupt. */
+ } else {
+ /* ... and the packet rounded to a doubleword. */
+ outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
+ dev_kfree_skb(skb);
+ if (inw(ioaddr + TxFree) > 1536) {
+ netif_wake_queue(dev);
+ } else
+ /* Interrupt us when the FIFO has room for max-sized packet. */
+ outw(SetTxThreshold + (1536 >> 2),
+ ioaddr + EL3_CMD);
+ }
+#else
+ /* ... and the packet rounded to a doubleword. */
+ outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
+ dev_kfree_skb(skb);
+ if (inw(ioaddr + TxFree) > 1536) {
+ netif_wake_queue(dev);
+ } else
+ /* Interrupt us when the FIFO has room for max-sized packet. */
+ outw(SetTxThreshold + (1536 >> 2), ioaddr + EL3_CMD);
+#endif /* bus master */
+
+
+ /* Clear the Tx status stack. */
+ {
+ short tx_status;
+ int i = 4;
+
+ while (--i > 0 && (tx_status = inb(ioaddr + TxStatus)) > 0) {
+ if (tx_status & 0x3C) { /* A Tx-disabling error occurred. */
+ if (corkscrew_debug > 2)
+ pr_debug("%s: Tx error, status %2.2x.\n",
+ dev->name, tx_status);
+ if (tx_status & 0x04)
+ dev->stats.tx_fifo_errors++;
+ if (tx_status & 0x38)
+ dev->stats.tx_aborted_errors++;
+ if (tx_status & 0x30) {
+ int j;
+ outw(TxReset, ioaddr + EL3_CMD);
+ for (j = 20; j >= 0; j--)
+ if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
+ break;
+ }
+ outw(TxEnable, ioaddr + EL3_CMD);
+ }
+ outb(0x00, ioaddr + TxStatus); /* Pop the status stack. */
+ }
+ }
+ return NETDEV_TX_OK;
+}
+
+/* The interrupt handler does all of the Rx thread work and cleans up
+ after the Tx thread. */
+
+static irqreturn_t corkscrew_interrupt(int irq, void *dev_id)
+{
+ /* Use the now-standard shared IRQ implementation. */
+ struct net_device *dev = dev_id;
+ struct corkscrew_private *lp = netdev_priv(dev);
+ int ioaddr, status;
+ int latency;
+ int i = max_interrupt_work;
+
+ ioaddr = dev->base_addr;
+ latency = inb(ioaddr + Timer);
+
+ spin_lock(&lp->lock);
+
+ status = inw(ioaddr + EL3_STATUS);
+
+ if (corkscrew_debug > 4)
+ pr_debug("%s: interrupt, status %4.4x, timer %d.\n",
+ dev->name, status, latency);
+ if ((status & 0xE000) != 0xE000) {
+ static int donedidthis;
+ /* Some interrupt controllers store a bogus interrupt from boot-time.
+ Ignore a single early interrupt, but don't hang the machine for
+ other interrupt problems. */
+ if (donedidthis++ > 100) {
+ pr_err("%s: Bogus interrupt, bailing. Status %4.4x, start=%d.\n",
+ dev->name, status, netif_running(dev));
+ free_irq(dev->irq, dev);
+ dev->irq = -1;
+ }
+ }
+
+ do {
+ if (corkscrew_debug > 5)
+ pr_debug("%s: In interrupt loop, status %4.4x.\n",
+ dev->name, status);
+ if (status & RxComplete)
+ corkscrew_rx(dev);
+
+ if (status & TxAvailable) {
+ if (corkscrew_debug > 5)
+ pr_debug(" TX room bit was handled.\n");
+ /* There's room in the FIFO for a full-sized packet. */
+ outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
+ netif_wake_queue(dev);
+ }
+ if (status & DownComplete) {
+ unsigned int dirty_tx = lp->dirty_tx;
+
+ while (lp->cur_tx - dirty_tx > 0) {
+ int entry = dirty_tx % TX_RING_SIZE;
+ if (inl(ioaddr + DownListPtr) == isa_virt_to_bus(&lp->tx_ring[entry]))
+ break; /* It still hasn't been processed. */
+ if (lp->tx_skbuff[entry]) {
+ dev_kfree_skb_irq(lp->tx_skbuff[entry]);
+ lp->tx_skbuff[entry] = NULL;
+ }
+ dirty_tx++;
+ }
+ lp->dirty_tx = dirty_tx;
+ outw(AckIntr | DownComplete, ioaddr + EL3_CMD);
+ if (lp->tx_full && (lp->cur_tx - dirty_tx <= TX_RING_SIZE - 1)) {
+ lp->tx_full = 0;
+ netif_wake_queue(dev);
+ }
+ }
+#ifdef VORTEX_BUS_MASTER
+ if (status & DMADone) {
+ outw(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
+ dev_kfree_skb_irq(lp->tx_skb); /* Release the transferred buffer */
+ netif_wake_queue(dev);
+ }
+#endif
+ if (status & UpComplete) {
+ boomerang_rx(dev);
+ outw(AckIntr | UpComplete, ioaddr + EL3_CMD);
+ }
+ if (status & (AdapterFailure | RxEarly | StatsFull)) {
+ /* Handle all uncommon interrupts at once. */
+ if (status & RxEarly) { /* Rx early is unused. */
+ corkscrew_rx(dev);
+ outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
+ }
+ if (status & StatsFull) { /* Empty statistics. */
+ static int DoneDidThat;
+ if (corkscrew_debug > 4)
+ pr_debug("%s: Updating stats.\n", dev->name);
+ update_stats(ioaddr, dev);
+ /* DEBUG HACK: Disable statistics as an interrupt source. */
+ /* This occurs when we have the wrong media type! */
+ if (DoneDidThat == 0 && inw(ioaddr + EL3_STATUS) & StatsFull) {
+ int win, reg;
+ pr_notice("%s: Updating stats failed, disabling stats as an interrupt source.\n",
+ dev->name);
+ for (win = 0; win < 8; win++) {
+ EL3WINDOW(win);
+ pr_notice("Vortex window %d:", win);
+ for (reg = 0; reg < 16; reg++)
+ pr_cont(" %2.2x", inb(ioaddr + reg));
+ pr_cont("\n");
+ }
+ EL3WINDOW(7);
+ outw(SetIntrEnb | TxAvailable |
+ RxComplete | AdapterFailure |
+ UpComplete | DownComplete |
+ TxComplete, ioaddr + EL3_CMD);
+ DoneDidThat++;
+ }
+ }
+ if (status & AdapterFailure) {
+ /* Adapter failure requires Rx reset and reinit. */
+ outw(RxReset, ioaddr + EL3_CMD);
+ /* Set the Rx filter to the current state. */
+ set_rx_mode(dev);
+ outw(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
+ outw(AckIntr | AdapterFailure,
+ ioaddr + EL3_CMD);
+ }
+ }
+
+ if (--i < 0) {
+ pr_err("%s: Too much work in interrupt, status %4.4x. Disabling functions (%4.4x).\n",
+ dev->name, status, SetStatusEnb | ((~status) & 0x7FE));
+ /* Disable all pending interrupts. */
+ outw(SetStatusEnb | ((~status) & 0x7FE), ioaddr + EL3_CMD);
+ outw(AckIntr | 0x7FF, ioaddr + EL3_CMD);
+ break;
+ }
+ /* Acknowledge the IRQ. */
+ outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
+
+ } while ((status = inw(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));
+
+ spin_unlock(&lp->lock);
+
+ if (corkscrew_debug > 4)
+ pr_debug("%s: exiting interrupt, status %4.4x.\n", dev->name, status);
+ return IRQ_HANDLED;
+}
+
+static int corkscrew_rx(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+ int i;
+ short rx_status;
+
+ if (corkscrew_debug > 5)
+ pr_debug(" In rx_packet(), status %4.4x, rx_status %4.4x.\n",
+ inw(ioaddr + EL3_STATUS), inw(ioaddr + RxStatus));
+ while ((rx_status = inw(ioaddr + RxStatus)) > 0) {
+ if (rx_status & 0x4000) { /* Error, update stats. */
+ unsigned char rx_error = inb(ioaddr + RxErrors);
+ if (corkscrew_debug > 2)
+ pr_debug(" Rx error: status %2.2x.\n",
+ rx_error);
+ dev->stats.rx_errors++;
+ if (rx_error & 0x01)
+ dev->stats.rx_over_errors++;
+ if (rx_error & 0x02)
+ dev->stats.rx_length_errors++;
+ if (rx_error & 0x04)
+ dev->stats.rx_frame_errors++;
+ if (rx_error & 0x08)
+ dev->stats.rx_crc_errors++;
+ if (rx_error & 0x10)
+ dev->stats.rx_length_errors++;
+ } else {
+ /* The packet length: up to 4.5K! */
+ short pkt_len = rx_status & 0x1fff;
+ struct sk_buff *skb;
+
+ skb = dev_alloc_skb(pkt_len + 5 + 2);
+ if (corkscrew_debug > 4)
+ pr_debug("Receiving packet size %d status %4.4x.\n",
+ pkt_len, rx_status);
+ if (skb != NULL) {
+ skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
+ /* 'skb_put()' points to the start of sk_buff data area. */
+ insl(ioaddr + RX_FIFO,
+ skb_put(skb, pkt_len),
+ (pkt_len + 3) >> 2);
+ outw(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
+ skb->protocol = eth_type_trans(skb, dev);
+ netif_rx(skb);
+ dev->stats.rx_packets++;
+ dev->stats.rx_bytes += pkt_len;
+ /* Wait a limited time to go to next packet. */
+ for (i = 200; i >= 0; i--)
+ if (! (inw(ioaddr + EL3_STATUS) & CmdInProgress))
+ break;
+ continue;
+ } else if (corkscrew_debug)
+ pr_debug("%s: Couldn't allocate a sk_buff of size %d.\n", dev->name, pkt_len);
+ }
+ outw(RxDiscard, ioaddr + EL3_CMD);
+ dev->stats.rx_dropped++;
+ /* Wait a limited time to skip this packet. */
+ for (i = 200; i >= 0; i--)
+ if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
+ break;
+ }
+ return 0;
+}
+
+static int boomerang_rx(struct net_device *dev)
+{
+ struct corkscrew_private *vp = netdev_priv(dev);
+ int entry = vp->cur_rx % RX_RING_SIZE;
+ int ioaddr = dev->base_addr;
+ int rx_status;
+
+ if (corkscrew_debug > 5)
+ pr_debug(" In boomerang_rx(), status %4.4x, rx_status %4.4x.\n",
+ inw(ioaddr + EL3_STATUS), inw(ioaddr + RxStatus));
+ while ((rx_status = vp->rx_ring[entry].status) & RxDComplete) {
+ if (rx_status & RxDError) { /* Error, update stats. */
+ unsigned char rx_error = rx_status >> 16;
+ if (corkscrew_debug > 2)
+ pr_debug(" Rx error: status %2.2x.\n",
+ rx_error);
+ dev->stats.rx_errors++;
+ if (rx_error & 0x01)
+ dev->stats.rx_over_errors++;
+ if (rx_error & 0x02)
+ dev->stats.rx_length_errors++;
+ if (rx_error & 0x04)
+ dev->stats.rx_frame_errors++;
+ if (rx_error & 0x08)
+ dev->stats.rx_crc_errors++;
+ if (rx_error & 0x10)
+ dev->stats.rx_length_errors++;
+ } else {
+ /* The packet length: up to 4.5K! */
+ short pkt_len = rx_status & 0x1fff;
+ struct sk_buff *skb;
+
+ dev->stats.rx_bytes += pkt_len;
+ if (corkscrew_debug > 4)
+ pr_debug("Receiving packet size %d status %4.4x.\n",
+ pkt_len, rx_status);
+
+ /* Decide whether the packet is small enough to be worth copying
+    into a properly sized skbuff, or should be passed up in its
+    full-sized ring buffer as-is. */
+ if (pkt_len < rx_copybreak &&
+ (skb = dev_alloc_skb(pkt_len + 4)) != NULL) {
+ skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
+ /* 'skb_put()' points to the start of sk_buff data area. */
+ memcpy(skb_put(skb, pkt_len),
+ isa_bus_to_virt(vp->rx_ring[entry].
+ addr), pkt_len);
+ rx_copy++;
+ } else {
+ void *temp;
+ /* Pass up the skbuff already on the Rx ring. */
+ skb = vp->rx_skbuff[entry];
+ vp->rx_skbuff[entry] = NULL;
+ temp = skb_put(skb, pkt_len);
+ /* Remove this checking code for final release. */
+ if (isa_bus_to_virt(vp->rx_ring[entry].addr) != temp)
+ pr_warning("%s: Warning -- the skbuff addresses do not match"
+ " in boomerang_rx: %p vs. %p / %p.\n",
+ dev->name,
+ isa_bus_to_virt(vp->
+ rx_ring[entry].
+ addr), skb->head,
+ temp);
+ rx_nocopy++;
+ }
+ skb->protocol = eth_type_trans(skb, dev);
+ netif_rx(skb);
+ dev->stats.rx_packets++;
+ }
+ entry = (++vp->cur_rx) % RX_RING_SIZE;
+ }
+ /* Refill the Rx ring buffers. */
+ for (; vp->cur_rx - vp->dirty_rx > 0; vp->dirty_rx++) {
+ struct sk_buff *skb;
+ entry = vp->dirty_rx % RX_RING_SIZE;
+ if (vp->rx_skbuff[entry] == NULL) {
+ skb = dev_alloc_skb(PKT_BUF_SZ);
+ if (skb == NULL)
+ break; /* Bad news! */
+ skb->dev = dev; /* Mark as being used by this device. */
+ skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
+ vp->rx_ring[entry].addr = isa_virt_to_bus(skb->data);
+ vp->rx_skbuff[entry] = skb;
+ }
+ vp->rx_ring[entry].status = 0; /* Clear complete bit. */
+ }
+ return 0;
+}
+
+static int corkscrew_close(struct net_device *dev)
+{
+ struct corkscrew_private *vp = netdev_priv(dev);
+ int ioaddr = dev->base_addr;
+ int i;
+
+ netif_stop_queue(dev);
+
+ if (corkscrew_debug > 1) {
+ pr_debug("%s: corkscrew_close() status %4.4x, Tx status %2.2x.\n",
+ dev->name, inw(ioaddr + EL3_STATUS),
+ inb(ioaddr + TxStatus));
+ pr_debug("%s: corkscrew close stats: rx_nocopy %d rx_copy %d tx_queued %d.\n",
+ dev->name, rx_nocopy, rx_copy, queued_packet);
+ }
+
+ del_timer(&vp->timer);
+
+ /* Turn off statistics ASAP. We update lp->stats below. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+
+ /* Disable the receiver and transmitter. */
+ outw(RxDisable, ioaddr + EL3_CMD);
+ outw(TxDisable, ioaddr + EL3_CMD);
+
+ if (dev->if_port == XCVR_10base2)
+ /* Turn off thinnet power. Green! */
+ outw(StopCoax, ioaddr + EL3_CMD);
+
+ free_irq(dev->irq, dev);
+
+ outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
+
+ update_stats(ioaddr, dev);
+ if (vp->full_bus_master_rx) { /* Free Boomerang bus master Rx buffers. */
+ outl(0, ioaddr + UpListPtr);
+ for (i = 0; i < RX_RING_SIZE; i++)
+ if (vp->rx_skbuff[i]) {
+ dev_kfree_skb(vp->rx_skbuff[i]);
+ vp->rx_skbuff[i] = NULL;
+ }
+ }
+ if (vp->full_bus_master_tx) { /* Free Boomerang bus master Tx buffers. */
+ outl(0, ioaddr + DownListPtr);
+ for (i = 0; i < TX_RING_SIZE; i++)
+ if (vp->tx_skbuff[i]) {
+ dev_kfree_skb(vp->tx_skbuff[i]);
+ vp->tx_skbuff[i] = NULL;
+ }
+ }
+
+ return 0;
+}
+
+static struct net_device_stats *corkscrew_get_stats(struct net_device *dev)
+{
+ struct corkscrew_private *vp = netdev_priv(dev);
+ unsigned long flags;
+
+ if (netif_running(dev)) {
+ spin_lock_irqsave(&vp->lock, flags);
+ update_stats(dev->base_addr, dev);
+ spin_unlock_irqrestore(&vp->lock, flags);
+ }
+ return &dev->stats;
+}
+
+/* Update statistics.
+ Unlike with the EL3 we need not worry about interrupts changing
+ the window setting from underneath us, but we must still guard
+ against a race condition with a StatsUpdate interrupt updating the
+ table. This is done by checking that the ASM (!) code generated uses
+ atomic updates with '+='.
+ */
+static void update_stats(int ioaddr, struct net_device *dev)
+{
+ /* Unlike the 3c5x9 we need not turn off stats updates while reading. */
+ /* Switch to the stats window, and read everything. */
+ EL3WINDOW(6);
+ dev->stats.tx_carrier_errors += inb(ioaddr + 0);
+ dev->stats.tx_heartbeat_errors += inb(ioaddr + 1);
+ /* Multiple collisions. */ inb(ioaddr + 2);
+ dev->stats.collisions += inb(ioaddr + 3);
+ dev->stats.tx_window_errors += inb(ioaddr + 4);
+ dev->stats.rx_fifo_errors += inb(ioaddr + 5);
+ dev->stats.tx_packets += inb(ioaddr + 6);
+ dev->stats.tx_packets += (inb(ioaddr + 9) & 0x30) << 4;
+ /* Rx packets */ inb(ioaddr + 7);
+ /* Must read to clear */
+ /* Tx deferrals */ inb(ioaddr + 8);
+ /* Don't bother with register 9, an extension of registers 6&7.
+ If we do use the 6&7 values the atomic update assumption above
+ is invalid. */
+ inw(ioaddr + 10); /* Total Rx and Tx octets. */
+ inw(ioaddr + 12);
+ /* New: On the Vortex we must also clear the BadSSD counter. */
+ EL3WINDOW(4);
+ inb(ioaddr + 12);
+
+ /* We change back to window 7 (not 1) with the Vortex. */
+ EL3WINDOW(7);
+}
+
+/* This new version of set_rx_mode() supports v1.4 kernels.
+ The Vortex chip has no documented multicast filter, so the only
+ multicast setting is to receive all multicast frames. At least
+ the chip has a very clean way to set the mode, unlike many others. */
+static void set_rx_mode(struct net_device *dev)
+{
+ int ioaddr = dev->base_addr;
+ short new_mode;
+
+ if (dev->flags & IFF_PROMISC) {
+ if (corkscrew_debug > 3)
+ pr_debug("%s: Setting promiscuous mode.\n",
+ dev->name);
+ new_mode = SetRxFilter | RxStation | RxMulticast | RxBroadcast | RxProm;
+ } else if (!netdev_mc_empty(dev) || dev->flags & IFF_ALLMULTI) {
+ new_mode = SetRxFilter | RxStation | RxMulticast | RxBroadcast;
+ } else
+ new_mode = SetRxFilter | RxStation | RxBroadcast;
+
+ outw(new_mode, ioaddr + EL3_CMD);
+}
+
+static void netdev_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ strcpy(info->driver, DRV_NAME);
+ strcpy(info->version, DRV_VERSION);
+ sprintf(info->bus_info, "ISA 0x%lx", dev->base_addr);
+}
+
+static u32 netdev_get_msglevel(struct net_device *dev)
+{
+ return corkscrew_debug;
+}
+
+static void netdev_set_msglevel(struct net_device *dev, u32 level)
+{
+ corkscrew_debug = level;
+}
+
+static const struct ethtool_ops netdev_ethtool_ops = {
+ .get_drvinfo = netdev_get_drvinfo,
+ .get_msglevel = netdev_get_msglevel,
+ .set_msglevel = netdev_set_msglevel,
+};
+
+
+#ifdef MODULE
+void cleanup_module(void)
+{
+ while (!list_empty(&root_corkscrew_dev)) {
+ struct net_device *dev;
+ struct corkscrew_private *vp;
+
+ vp = list_entry(root_corkscrew_dev.next,
+ struct corkscrew_private, list);
+ dev = vp->our_dev;
+ unregister_netdev(dev);
+ cleanup_card(dev);
+ free_netdev(dev);
+ }
+}
+#endif /* MODULE */
--- /dev/null
+/* 3c574.c: A PCMCIA ethernet driver for the 3com 3c574 "RoadRunner".
+
+ Written 1993-1998 by
+ Donald Becker, becker@scyld.com, (driver core) and
+ David Hinds, dahinds@users.sourceforge.net (from his PC card code).
+ Locking fixes (C) Copyright 2003 Red Hat Inc
+
+ This software may be used and distributed according to the terms of
+ the GNU General Public License, incorporated herein by reference.
+
+ This driver derives from Donald Becker's 3c509 core, which has the
+ following copyright:
+ Copyright 1993 United States Government as represented by the
+ Director, National Security Agency.
+
+
+*/
+
+/*
+ Theory of Operation
+
+I. Board Compatibility
+
+This device driver is designed for the 3Com 3c574 PC card Fast Ethernet
+Adapter.
+
+II. Board-specific settings
+
+None -- PC cards are autoconfigured.
+
+III. Driver operation
+
+The 3c574 uses a Boomerang-style interface, without the bus-master capability.
+See the Boomerang driver and documentation for most details.
+
+IV. Notes and chip documentation.
+
+Two added registers are used to enhance PIO performance, RunnerRdCtrl and
+RunnerWrCtrl. These are 11 bit down-counters that are preloaded with the
+count of word (16 bits) reads or writes the driver is about to do to the Rx
+or Tx FIFO. The chip is then able to hide the internal-PCI-bus to PC-card
+translation latency by buffering the I/O operations with an 8 word FIFO.
+Note: No other chip accesses are permitted when this buffer is used.
+
+A second enhancement is that both attribute and common memory space
+0x0800-0x0fff can be translated to the PIO FIFO. Thus memory operations (faster
+with *some* PCcard bridges) may be used instead of I/O operations.
+This is enabled by setting the 0x10 bit in the PCMCIA LAN COR.
+
+Some slow PC card bridges work better if they never see a WAIT signal.
+This is configured by setting the 0x20 bit in the PCMCIA LAN COR.
+Only do this after testing that it is reliable and improves performance.
+
+The upper five bits of RunnerRdCtrl are used to window into PCcard
+configuration space registers. Window 0 is the regular Boomerang/Odie
+register set, 1-5 are various PC card control registers, and 16-31 are
+the (reversed!) CIS table.
+
+A final note: writing the InternalConfig register in window 3 with an
+invalid ramWidth is Very Bad.
+
+V. References
+
+http://www.scyld.com/expert/NWay.html
+http://www.national.com/opf/DP/DP83840A.html
+
+Thanks to Terry Murphy of 3Com for providing development information for
+earlier 3Com products.
+
+*/
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/interrupt.h>
+#include <linux/in.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/if_arp.h>
+#include <linux/ioport.h>
+#include <linux/bitops.h>
+#include <linux/mii.h>
+
+#include <pcmcia/cistpl.h>
+#include <pcmcia/cisreg.h>
+#include <pcmcia/ciscode.h>
+#include <pcmcia/ds.h>
+
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/system.h>
+
+/*====================================================================*/
+
+/* Module parameters */
+
+MODULE_AUTHOR("David Hinds <dahinds@users.sourceforge.net>");
+MODULE_DESCRIPTION("3Com 3c574 series PCMCIA ethernet driver");
+MODULE_LICENSE("GPL");
+
+#define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0)
+
+/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
+INT_MODULE_PARM(max_interrupt_work, 32);
+
+/* Force full duplex modes? */
+INT_MODULE_PARM(full_duplex, 0);
+
+/* Autodetect link polarity reversal? */
+INT_MODULE_PARM(auto_polarity, 1);
+
+
+/*====================================================================*/
+
+/* Time in jiffies before concluding the transmitter is hung. */
+#define TX_TIMEOUT ((800*HZ)/1000)
+
+/* To minimize the size of the driver source and make the driver more
+ readable, not all constants are symbolically defined.
+ You'll need the manual if you want to understand driver details anyway. */
+/* Offsets from base I/O address. */
+#define EL3_DATA 0x00
+#define EL3_CMD 0x0e
+#define EL3_STATUS 0x0e
+
+#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
+
+/* The top five bits written to EL3_CMD are a command, the lower
+ 11 bits are the parameter, if applicable. */
+enum el3_cmds {
+ TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
+ RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11, RxDiscard = 8<<11,
+ TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
+ FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
+ SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
+ SetTxThreshold = 18<<11, SetTxStart = 19<<11, StatsEnable = 21<<11,
+ StatsDisable = 22<<11, StopCoax = 23<<11,
+};
+
+enum elxl_status {
+ IntLatch = 0x0001, AdapterFailure = 0x0002, TxComplete = 0x0004,
+ TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
+ IntReq = 0x0040, StatsFull = 0x0080, CmdBusy = 0x1000 };
+
+/* The SetRxFilter command accepts the following classes: */
+enum RxFilter {
+ RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8
+};
+
+enum Window0 {
+ Wn0EepromCmd = 10, Wn0EepromData = 12, /* EEPROM command/address, data. */
+ IntrStatus=0x0E, /* Valid in all windows. */
+};
+/* These assume the larger EEPROM. */
+enum Win0_EEPROM_cmds {
+ EEPROM_Read = 0x200, EEPROM_WRITE = 0x100, EEPROM_ERASE = 0x300,
+ EEPROM_EWENB = 0x30, /* Enable erasing/writing for 10 msec. */
+ EEPROM_EWDIS = 0x00, /* Disable EWENB before 10 msec timeout. */
+};
+
+/* Register window 1 offsets, the window used in normal operation.
+ On the "Odie" this window is always mapped at offsets 0x10-0x1f.
+ Except for TxFree, which is overlapped by RunnerWrCtrl. */
+enum Window1 {
+ TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
+ RxStatus = 0x18, Timer=0x1A, TxStatus = 0x1B,
+ TxFree = 0x0C, /* Remaining free bytes in Tx buffer. */
+ RunnerRdCtrl = 0x16, RunnerWrCtrl = 0x1c,
+};
+
+enum Window3 { /* Window 3: MAC/config bits. */
+ Wn3_Config=0, Wn3_MAC_Ctrl=6, Wn3_Options=8,
+};
+enum wn3_config {
+ Ram_size = 7,
+ Ram_width = 8,
+ Ram_speed = 0x30,
+ Rom_size = 0xc0,
+ Ram_split_shift = 16,
+ Ram_split = 3 << Ram_split_shift,
+ Xcvr_shift = 20,
+ Xcvr = 7 << Xcvr_shift,
+ Autoselect = 0x1000000,
+};
+
+enum Window4 { /* Window 4: Xcvr/media bits. */
+ Wn4_FIFODiag = 4, Wn4_NetDiag = 6, Wn4_PhysicalMgmt=8, Wn4_Media = 10,
+};
+
+#define MEDIA_TP 0x00C0 /* Enable link beat and jabber for 10baseT. */
+
+struct el3_private {
+ struct pcmcia_device *p_dev;
+ u16 advertising, partner; /* NWay media advertisement */
+ unsigned char phys; /* MII device address */
+ unsigned int autoselect:1, default_media:3; /* Read from the EEPROM/Wn3_Config. */
+ /* for transceiver monitoring */
+ struct timer_list media;
+ unsigned short media_status;
+ unsigned short fast_poll;
+ unsigned long last_irq;
+ spinlock_t window_lock; /* Guards the Window selection */
+};
+
+/* Set iff a MII transceiver on any interface requires mdio preamble.
+ This is only set with the original DP83840 on older 3c905 boards, so the extra
+ code size of a per-interface flag is not worthwhile. */
+static char mii_preamble_required = 0;
+
+/* Index of functions. */
+
+static int tc574_config(struct pcmcia_device *link);
+static void tc574_release(struct pcmcia_device *link);
+
+static void mdio_sync(unsigned int ioaddr, int bits);
+static int mdio_read(unsigned int ioaddr, int phy_id, int location);
+static void mdio_write(unsigned int ioaddr, int phy_id, int location,
+ int value);
+static unsigned short read_eeprom(unsigned int ioaddr, int index);
+static void tc574_wait_for_completion(struct net_device *dev, int cmd);
+
+static void tc574_reset(struct net_device *dev);
+static void media_check(unsigned long arg);
+static int el3_open(struct net_device *dev);
+static netdev_tx_t el3_start_xmit(struct sk_buff *skb,
+ struct net_device *dev);
+static irqreturn_t el3_interrupt(int irq, void *dev_id);
+static void update_stats(struct net_device *dev);
+static struct net_device_stats *el3_get_stats(struct net_device *dev);
+static int el3_rx(struct net_device *dev, int worklimit);
+static int el3_close(struct net_device *dev);
+static void el3_tx_timeout(struct net_device *dev);
+static int el3_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
+static void set_rx_mode(struct net_device *dev);
+static void set_multicast_list(struct net_device *dev);
+
+static void tc574_detach(struct pcmcia_device *p_dev);
+
+/*
+ tc574_attach() creates an "instance" of the driver, allocating
+ local data structures for one device. The device is registered
+ with Card Services.
+*/
+static const struct net_device_ops el3_netdev_ops = {
+ .ndo_open = el3_open,
+ .ndo_stop = el3_close,
+ .ndo_start_xmit = el3_start_xmit,
+ .ndo_tx_timeout = el3_tx_timeout,
+ .ndo_get_stats = el3_get_stats,
+ .ndo_do_ioctl = el3_ioctl,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
+static int tc574_probe(struct pcmcia_device *link)
+{
+ struct el3_private *lp;
+ struct net_device *dev;
+
+ dev_dbg(&link->dev, "3c574_attach()\n");
+
+ /* Create the PC card device object. */
+ dev = alloc_etherdev(sizeof(struct el3_private));
+ if (!dev)
+ return -ENOMEM;
+ lp = netdev_priv(dev);
+ link->priv = dev;
+ lp->p_dev = link;
+
+ spin_lock_init(&lp->window_lock);
+ link->resource[0]->end = 32;
+ link->resource[0]->flags |= IO_DATA_PATH_WIDTH_16;
+ link->config_flags |= CONF_ENABLE_IRQ;
+ link->config_index = 1;
+
+ dev->netdev_ops = &el3_netdev_ops;
+ dev->watchdog_timeo = TX_TIMEOUT;
+
+ return tc574_config(link);
+}
+
+static void tc574_detach(struct pcmcia_device *link)
+{
+ struct net_device *dev = link->priv;
+
+ dev_dbg(&link->dev, "3c574_detach()\n");
+
+ unregister_netdev(dev);
+
+ tc574_release(link);
+
+ free_netdev(dev);
+} /* tc574_detach */
+
+static const char *ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
+
+static int tc574_config(struct pcmcia_device *link)
+{
+ struct net_device *dev = link->priv;
+ struct el3_private *lp = netdev_priv(dev);
+ int ret, i, j;
+ unsigned int ioaddr;
+ __be16 *phys_addr;
+ char *cardname;
+ __u32 config;
+ u8 *buf;
+ size_t len;
+
+ phys_addr = (__be16 *)dev->dev_addr;
+
+ dev_dbg(&link->dev, "3c574_config()\n");
+
+ link->io_lines = 16;
+
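+ /* Try candidate I/O bases 0x20 apart; the XOR makes the 0x300-0x3ff range come first, then falls back to lower addresses. */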
+ for (i = j = 0; j < 0x400; j += 0x20) {
+ link->resource[0]->start = j ^ 0x300;
+ i = pcmcia_request_io(link);
+ if (i == 0)
+ break;
+ }
+ if (i != 0)
+ goto failed;
+
+ ret = pcmcia_request_irq(link, el3_interrupt);
+ if (ret)
+ goto failed;
+
+ ret = pcmcia_enable_device(link);
+ if (ret)
+ goto failed;
+
+ dev->irq = link->irq;
+ dev->base_addr = link->resource[0]->start;
+
+ ioaddr = dev->base_addr;
+
+ /* The 3c574 normally uses an EEPROM for configuration info, including
+ the hardware address. Future products may include a modem chip
+ and put the address in the CIS. */
+
+ len = pcmcia_get_tuple(link, 0x88, &buf);
+ if (buf && len >= 6) {
+ for (i = 0; i < 3; i++)
+ phys_addr[i] = htons(le16_to_cpu(buf[i * 2]));
+ kfree(buf);
+ } else {
+ kfree(buf); /* 0 < len < 6 */
+ EL3WINDOW(0);
+ for (i = 0; i < 3; i++)
+ phys_addr[i] = htons(read_eeprom(ioaddr, i + 10));
+ if (phys_addr[0] == htons(0x6060)) {
+ pr_notice("IO port conflict at 0x%03lx-0x%03lx\n",
+ dev->base_addr, dev->base_addr+15);
+ goto failed;
+ }
+ }
+ if (link->prod_id[1])
+ cardname = link->prod_id[1];
+ else
+ cardname = "3Com 3c574";
+
+ {
+ u_char mcr;
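+ /* The top bits of RunnerRdCtrl select a PC card config window (see the
+ notes above): read the ASIC revision byte from window 2, then switch
+ back to the regular register set. */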
+ outw(2<<11, ioaddr + RunnerRdCtrl);
+ mcr = inb(ioaddr + 2);
+ outw(0<<11, ioaddr + RunnerRdCtrl);
+ pr_info(" ASIC rev %d,", mcr>>3);
+ EL3WINDOW(3);
+ config = inl(ioaddr + Wn3_Config);
+ lp->default_media = (config & Xcvr) >> Xcvr_shift;
+ lp->autoselect = config & Autoselect ? 1 : 0;
+ }
+
+ init_timer(&lp->media);
+
+ {
+ int phy;
+
+ /* Roadrunner only: Turn on the MII transceiver */
+ outw(0x8040, ioaddr + Wn3_Options);
+ mdelay(1);
+ outw(0xc040, ioaddr + Wn3_Options);
+ tc574_wait_for_completion(dev, TxReset);
+ tc574_wait_for_completion(dev, RxReset);
+ mdelay(1);
+ outw(0x8040, ioaddr + Wn3_Options);
+
+ EL3WINDOW(4);
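+ /* Scan PHY addresses 1-31; phy == 32 wraps to address 0, so the default address is tried last. */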
+ for (phy = 1; phy <= 32; phy++) {
+ int mii_status;
+ mdio_sync(ioaddr, 32);
+ mii_status = mdio_read(ioaddr, phy & 0x1f, 1);
+ if (mii_status != 0xffff) {
+ lp->phys = phy & 0x1f;
+ dev_dbg(&link->dev, " MII transceiver at "
+ "index %d, status %x.\n",
+ phy, mii_status);
+ if ((mii_status & 0x0040) == 0)
+ mii_preamble_required = 1;
+ break;
+ }
+ }
+ if (phy > 32) {
+ pr_notice(" No MII transceivers found!\n");
+ goto failed;
+ }
+ i = mdio_read(ioaddr, lp->phys, 16) | 0x40;
+ mdio_write(ioaddr, lp->phys, 16, i);
+ lp->advertising = mdio_read(ioaddr, lp->phys, 4);
+ if (full_duplex) {
+ /* Only advertise the FD media types. */
+ lp->advertising &= ~0x02a0;
+ mdio_write(ioaddr, lp->phys, 4, lp->advertising);
+ }
+ }
+
+ SET_NETDEV_DEV(dev, &link->dev);
+
+ if (register_netdev(dev) != 0) {
+ pr_notice("register_netdev() failed\n");
+ goto failed;
+ }
+
+ netdev_info(dev, "%s at io %#3lx, irq %d, hw_addr %pM\n",
+ cardname, dev->base_addr, dev->irq, dev->dev_addr);
+ netdev_info(dev, " %dK FIFO split %s Rx:Tx, %sMII interface.\n",
+ 8 << (config & Ram_size),
+ ram_split[(config & Ram_split) >> Ram_split_shift],
+ config & Autoselect ? "autoselect " : "");
+
+ return 0;
+
+failed:
+ tc574_release(link);
+ return -ENODEV;
+
+} /* tc574_config */
+
+static void tc574_release(struct pcmcia_device *link)
+{
+ pcmcia_disable_device(link);
+}
+
+static int tc574_suspend(struct pcmcia_device *link)
+{
+ struct net_device *dev = link->priv;
+
+ if (link->open)
+ netif_device_detach(dev);
+
+ return 0;
+}
+
+static int tc574_resume(struct pcmcia_device *link)
+{
+ struct net_device *dev = link->priv;
+
+ if (link->open) {
+ tc574_reset(dev);
+ netif_device_attach(dev);
+ }
+
+ return 0;
+}
+
+static void dump_status(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ EL3WINDOW(1);
+ netdev_info(dev, " irq status %04x, rx status %04x, tx status %02x, tx free %04x\n",
+ inw(ioaddr+EL3_STATUS),
+ inw(ioaddr+RxStatus), inb(ioaddr+TxStatus),
+ inw(ioaddr+TxFree));
+ EL3WINDOW(4);
+ netdev_info(dev, " diagnostics: fifo %04x net %04x ethernet %04x media %04x\n",
+ inw(ioaddr+0x04), inw(ioaddr+0x06),
+ inw(ioaddr+0x08), inw(ioaddr+0x0a));
+ EL3WINDOW(1);
+}
+
+/*
+ Use this for commands that may take time to finish
+*/
+static void tc574_wait_for_completion(struct net_device *dev, int cmd)
+{
+ int i = 1500;
+ outw(cmd, dev->base_addr + EL3_CMD);
+ while (--i > 0)
+ if (!(inw(dev->base_addr + EL3_STATUS) & 0x1000)) break;
+ if (i == 0)
+ netdev_notice(dev, "command 0x%04x did not complete!\n", cmd);
+}
+
+/* Read a word from the EEPROM using the regular EEPROM access register.
+ Assume that we are in register window zero.
+ */
+static unsigned short read_eeprom(unsigned int ioaddr, int index)
+{
+ int timer;
+ outw(EEPROM_Read + index, ioaddr + Wn0EepromCmd);
+ /* Pause for at least 162 usec for the read to take place. */
+ for (timer = 1620; timer >= 0; timer--) {
+ if ((inw(ioaddr + Wn0EepromCmd) & 0x8000) == 0)
+ break;
+ }
+ return inw(ioaddr + Wn0EepromData);
+}
+
+/* MII transceiver control section.
+ Read and write the MII registers using software-generated serial
+ MDIO protocol. See the MII specifications or DP83840A data sheet
+ for details.
+ The maximum data clock rate is 2.5 MHz. The timing is easily met by the
+ slow PC card interface. */
+
+#define MDIO_SHIFT_CLK 0x01
+#define MDIO_DIR_WRITE 0x04
+#define MDIO_DATA_WRITE0 (0x00 | MDIO_DIR_WRITE)
+#define MDIO_DATA_WRITE1 (0x02 | MDIO_DIR_WRITE)
+#define MDIO_DATA_READ 0x02
+#define MDIO_ENB_IN 0x00
+
+/* Generate the preamble required for initial synchronization and
+ a few older transceivers. */
+static void mdio_sync(unsigned int ioaddr, int bits)
+{
+ unsigned int mdio_addr = ioaddr + Wn4_PhysicalMgmt;
+
+ /* Establish sync by sending at least 32 logic ones. */
+ while (-- bits >= 0) {
+ outw(MDIO_DATA_WRITE1, mdio_addr);
+ outw(MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK, mdio_addr);
+ }
+}
+
+static int mdio_read(unsigned int ioaddr, int phy_id, int location)
+{
+ int i;
+ int read_cmd = (0xf6 << 10) | (phy_id << 5) | location;
+ unsigned int retval = 0;
+ unsigned int mdio_addr = ioaddr + Wn4_PhysicalMgmt;
+
+ if (mii_preamble_required)
+ mdio_sync(ioaddr, 32);
+
+ /* Shift the read command bits out. */
+ for (i = 14; i >= 0; i--) {
+ int dataval = (read_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
+ outw(dataval, mdio_addr);
+ outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
+ }
+ /* Read the two transition, 16 data, and wire-idle bits. */
+ for (i = 19; i > 0; i--) {
+ outw(MDIO_ENB_IN, mdio_addr);
+ retval = (retval << 1) | ((inw(mdio_addr) & MDIO_DATA_READ) ? 1 : 0);
+ outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
+ }
+ return (retval>>1) & 0xffff;
+}
+
+static void mdio_write(unsigned int ioaddr, int phy_id, int location, int value)
+{
+ int write_cmd = 0x50020000 | (phy_id << 23) | (location << 18) | value;
+ unsigned int mdio_addr = ioaddr + Wn4_PhysicalMgmt;
+ int i;
+
+ if (mii_preamble_required)
+ mdio_sync(ioaddr, 32);
+
+ /* Shift the command bits out. */
+ for (i = 31; i >= 0; i--) {
+ int dataval = (write_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
+ outw(dataval, mdio_addr);
+ outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
+ }
+ /* Leave the interface idle. */
+ for (i = 1; i >= 0; i--) {
+ outw(MDIO_ENB_IN, mdio_addr);
+ outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
+ }
+}
+
+/* Reset and restore all of the 3c574 registers. */
+static void tc574_reset(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ int i;
+ unsigned int ioaddr = dev->base_addr;
+ unsigned long flags;
+
+ tc574_wait_for_completion(dev, TotalReset|0x10);
+
+ spin_lock_irqsave(&lp->window_lock, flags);
+ /* Clear any transactions in progress. */
+ outw(0, ioaddr + RunnerWrCtrl);
+ outw(0, ioaddr + RunnerRdCtrl);
+
+ /* Set the station address and mask. */
+ EL3WINDOW(2);
+ for (i = 0; i < 6; i++)
+ outb(dev->dev_addr[i], ioaddr + i);
+ for (; i < 12; i+=2)
+ outw(0, ioaddr + i);
+
+ /* Reset config options */
+ EL3WINDOW(3);
+ outb((dev->mtu > 1500 ? 0x40 : 0), ioaddr + Wn3_MAC_Ctrl);
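+ /* Rewrite InternalConfig (note the ramWidth warning above) with a
+ known-good value, plus the autoselect bit when enabled. */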
+ outl((lp->autoselect ? 0x01000000 : 0) | 0x0062001b,
+ ioaddr + Wn3_Config);
+ /* Roadrunner only: Turn on the MII transceiver. */
+ outw(0x8040, ioaddr + Wn3_Options);
+ mdelay(1);
+ outw(0xc040, ioaddr + Wn3_Options);
+ EL3WINDOW(1);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+
+ tc574_wait_for_completion(dev, TxReset);
+ tc574_wait_for_completion(dev, RxReset);
+ mdelay(1);
+ spin_lock_irqsave(&lp->window_lock, flags);
+ EL3WINDOW(3);
+ outw(0x8040, ioaddr + Wn3_Options);
+
+ /* Switch to the stats window, and clear all stats by reading. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+ EL3WINDOW(6);
+ for (i = 0; i < 10; i++)
+ inb(ioaddr + i);
+ inw(ioaddr + 10);
+ inw(ioaddr + 12);
+ EL3WINDOW(4);
+ inb(ioaddr + 12);
+ inb(ioaddr + 13);
+
+ /* .. enable any extra statistics bits.. */
+ outw(0x0040, ioaddr + Wn4_NetDiag);
+
+ EL3WINDOW(1);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+
+ /* .. re-sync MII and re-fill what NWay is advertising. */
+ mdio_sync(ioaddr, 32);
+ mdio_write(ioaddr, lp->phys, 4, lp->advertising);
+ if (!auto_polarity) {
+ /* works for TDK 78Q2120 series MII's */
+ i = mdio_read(ioaddr, lp->phys, 16) | 0x20;
+ mdio_write(ioaddr, lp->phys, 16, i);
+ }
+
+ spin_lock_irqsave(&lp->window_lock, flags);
+ /* Switch to register set 1 for normal use, just for TxFree. */
+ set_rx_mode(dev);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+ outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
+ outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
+ outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
+ /* Allow status bits to be seen. */
+ outw(SetStatusEnb | 0xff, ioaddr + EL3_CMD);
+ /* Ack all pending events, and set active indicator mask. */
+ outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
+ ioaddr + EL3_CMD);
+ outw(SetIntrEnb | IntLatch | TxAvailable | RxComplete | StatsFull
+ | AdapterFailure | RxEarly, ioaddr + EL3_CMD);
+}
+
+static int el3_open(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ struct pcmcia_device *link = lp->p_dev;
+
+ if (!pcmcia_dev_present(link))
+ return -ENODEV;
+
+ link->open++;
+ netif_start_queue(dev);
+
+ tc574_reset(dev);
+ lp->media.function = media_check;
+ lp->media.data = (unsigned long) dev;
+ lp->media.expires = jiffies + HZ;
+ add_timer(&lp->media);
+
+ dev_dbg(&link->dev, "%s: opened, status %4.4x.\n",
+ dev->name, inw(dev->base_addr + EL3_STATUS));
+
+ return 0;
+}
+
+static void el3_tx_timeout(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+
+ netdev_notice(dev, "Transmit timed out!\n");
+ dump_status(dev);
+ dev->stats.tx_errors++;
+ dev->trans_start = jiffies; /* prevent tx timeout */
+ /* Issue TX_RESET and TX_START commands. */
+ tc574_wait_for_completion(dev, TxReset);
+ outw(TxEnable, ioaddr + EL3_CMD);
+ netif_wake_queue(dev);
+}
+
+static void pop_tx_status(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ int i;
+
+ /* Clear the Tx status stack. */
+ for (i = 32; i > 0; i--) {
+ u_char tx_status = inb(ioaddr + TxStatus);
+ if (!(tx_status & 0x84))
+ break;
+ /* reset transmitter on jabber error or underrun */
+ if (tx_status & 0x30)
+ tc574_wait_for_completion(dev, TxReset);
+ if (tx_status & 0x38) {
+ pr_debug("%s: transmit error: status 0x%02x\n",
+ dev->name, tx_status);
+ outw(TxEnable, ioaddr + EL3_CMD);
+ dev->stats.tx_aborted_errors++;
+ }
+ outb(0x00, ioaddr + TxStatus); /* Pop the status stack. */
+ }
+}
+
+static netdev_tx_t el3_start_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned long flags;
+
+ pr_debug("%s: el3_start_xmit(length = %ld) called, "
+ "status %4.4x.\n", dev->name, (long)skb->len,
+ inw(ioaddr + EL3_STATUS));
+
+ spin_lock_irqsave(&lp->window_lock, flags);
+
+ dev->stats.tx_bytes += skb->len;
+
+ /* Put out the doubleword header... */
+ outw(skb->len, ioaddr + TX_FIFO);
+ outw(0, ioaddr + TX_FIFO);
+ /* ... and the packet rounded to a doubleword. */
+ outsl(ioaddr + TX_FIFO, skb->data, (skb->len+3)>>2);
+
+ /* TxFree appears only in Window 1, not offset 0x1c. */
+ if (inw(ioaddr + TxFree) <= 1536) {
+ netif_stop_queue(dev);
+ /* Interrupt us when the FIFO has room for max-sized packet.
+ The threshold is in units of dwords. */
+ outw(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
+ }
+
+ pop_tx_status(dev);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+}
+
+/* The EL3 interrupt handler. */
+static irqreturn_t el3_interrupt(int irq, void *dev_id)
+{
+ struct net_device *dev = (struct net_device *) dev_id;
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned int ioaddr;
+ unsigned status;
+ int work_budget = max_interrupt_work;
+ int handled = 0;
+
+ if (!netif_device_present(dev))
+ return IRQ_NONE;
+ ioaddr = dev->base_addr;
+
+ pr_debug("%s: interrupt, status %4.4x.\n",
+ dev->name, inw(ioaddr + EL3_STATUS));
+
+ spin_lock(&lp->window_lock);
+
+ while ((status = inw(ioaddr + EL3_STATUS)) &
+ (IntLatch | RxComplete | RxEarly | StatsFull)) {
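+ /* Status bits 15:13 read back the selected window; a live card in
+ window 1 reads 0x2000, anything else means the card is gone. */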
+ if (!netif_device_present(dev) ||
+ ((status & 0xe000) != 0x2000)) {
+ pr_debug("%s: Interrupt from dead card\n", dev->name);
+ break;
+ }
+
+ handled = 1;
+
+ if (status & RxComplete)
+ work_budget = el3_rx(dev, work_budget);
+
+ if (status & TxAvailable) {
+ pr_debug(" TX room bit was handled.\n");
+ /* There's room in the FIFO for a full-sized packet. */
+ outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
+ netif_wake_queue(dev);
+ }
+
+ if (status & TxComplete)
+ pop_tx_status(dev);
+
+ if (status & (AdapterFailure | RxEarly | StatsFull)) {
+ /* Handle all uncommon interrupts. */
+ if (status & StatsFull)
+ update_stats(dev);
+ if (status & RxEarly) {
+ work_budget = el3_rx(dev, work_budget);
+ outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
+ }
+ if (status & AdapterFailure) {
+ u16 fifo_diag;
+ EL3WINDOW(4);
+ fifo_diag = inw(ioaddr + Wn4_FIFODiag);
+ EL3WINDOW(1);
+ netdev_notice(dev, "adapter failure, FIFO diagnostic register %04x\n",
+ fifo_diag);
+ if (fifo_diag & 0x0400) {
+ /* Tx overrun */
+ tc574_wait_for_completion(dev, TxReset);
+ outw(TxEnable, ioaddr + EL3_CMD);
+ }
+ if (fifo_diag & 0x2000) {
+ /* Rx underrun */
+ tc574_wait_for_completion(dev, RxReset);
+ set_rx_mode(dev);
+ outw(RxEnable, ioaddr + EL3_CMD);
+ }
+ outw(AckIntr | AdapterFailure, ioaddr + EL3_CMD);
+ }
+ }
+
+ if (--work_budget < 0) {
+ pr_debug("%s: Too much work in interrupt, "
+ "status %4.4x.\n", dev->name, status);
+ /* Clear all interrupts */
+ outw(AckIntr | 0xFF, ioaddr + EL3_CMD);
+ break;
+ }
+ /* Acknowledge the IRQ. */
+ outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
+ }
+
+ pr_debug("%s: exiting interrupt, status %4.4x.\n",
+ dev->name, inw(ioaddr + EL3_STATUS));
+
+ spin_unlock(&lp->window_lock);
+ return IRQ_RETVAL(handled);
+}
+
+/*
+ This timer serves two purposes: to check for missed interrupts
+ (and as a last resort, poll the NIC for events), and to monitor
+ the MII, reporting changes in cable status.
+*/
+static void media_check(unsigned long arg)
+{
+ struct net_device *dev = (struct net_device *) arg;
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned int ioaddr = dev->base_addr;
+ unsigned long flags;
+ unsigned short /* cable, */ media, partner;
+
+ if (!netif_device_present(dev))
+ goto reschedule;
+
+ /* Check for pending interrupt with expired latency timer: with
+ this, we can limp along even if the interrupt is blocked */
+ if ((inw(ioaddr + EL3_STATUS) & IntLatch) && (inb(ioaddr + Timer) == 0xff)) {
+ if (!lp->fast_poll)
+ netdev_info(dev, "interrupt(s) dropped!\n");
+
+ local_irq_save(flags);
+ el3_interrupt(dev->irq, dev);
+ local_irq_restore(flags);
+
+ lp->fast_poll = HZ;
+ }
+ if (lp->fast_poll) {
+ lp->fast_poll--;
+ lp->media.expires = jiffies + 2*HZ/100;
+ add_timer(&lp->media);
+ return;
+ }
+
+ spin_lock_irqsave(&lp->window_lock, flags);
+ EL3WINDOW(4);
+ media = mdio_read(ioaddr, lp->phys, 1);
+ partner = mdio_read(ioaddr, lp->phys, 5);
+ EL3WINDOW(1);
+
+ if (media != lp->media_status) {
+ if ((media ^ lp->media_status) & 0x0004)
+ netdev_info(dev, "%s link beat\n",
+ (lp->media_status & 0x0004) ? "lost" : "found");
+ if ((media ^ lp->media_status) & 0x0020) {
+ lp->partner = 0;
+ if (lp->media_status & 0x0020) {
+ netdev_info(dev, "autonegotiation restarted\n");
+ } else if (partner) {
+ partner &= lp->advertising;
+ lp->partner = partner;
+ netdev_info(dev, "autonegotiation complete: "
+ "%dbaseT-%cD selected\n",
+ (partner & 0x0180) ? 100 : 10,
+ (partner & 0x0140) ? 'F' : 'H');
+ } else {
+ netdev_info(dev, "link partner did not autonegotiate\n");
+ }
+
+ EL3WINDOW(3);
+ outb((partner & 0x0140 ? 0x20 : 0) |
+ (dev->mtu > 1500 ? 0x40 : 0), ioaddr + Wn3_MAC_Ctrl);
+ EL3WINDOW(1);
+
+ }
+ if (media & 0x0010)
+ netdev_info(dev, "remote fault detected\n");
+ if (media & 0x0002)
+ netdev_info(dev, "jabber detected\n");
+ lp->media_status = media;
+ }
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+
+reschedule:
+ lp->media.expires = jiffies + HZ;
+ add_timer(&lp->media);
+}
+
+static struct net_device_stats *el3_get_stats(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+
+ if (netif_device_present(dev)) {
+ unsigned long flags;
+ spin_lock_irqsave(&lp->window_lock, flags);
+ update_stats(dev);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+ }
+ return &dev->stats;
+}
+
+/* Update statistics.
+ Surprisingly this need not be run single-threaded, but it effectively is.
+ The counters clear when read, so the adds must merely be atomic.
+ */
+static void update_stats(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ u8 rx, tx, up;
+
+ pr_debug("%s: updating the statistics.\n", dev->name);
+
+ if (inw(ioaddr+EL3_STATUS) == 0xffff) /* No card. */
+ return;
+
+ /* Unlike the 3c509 we need not turn off stats updates while reading. */
+ /* Switch to the stats window, and read everything. */
+ EL3WINDOW(6);
+ dev->stats.tx_carrier_errors += inb(ioaddr + 0);
+ dev->stats.tx_heartbeat_errors += inb(ioaddr + 1);
+ /* Multiple collisions. */ inb(ioaddr + 2);
+ dev->stats.collisions += inb(ioaddr + 3);
+ dev->stats.tx_window_errors += inb(ioaddr + 4);
+ dev->stats.rx_fifo_errors += inb(ioaddr + 5);
+ dev->stats.tx_packets += inb(ioaddr + 6);
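+ /* Register 9 holds the upper bits of the Rx/Tx frame counters; fold its Tx bits into tx_packets. */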
+ up = inb(ioaddr + 9);
+ dev->stats.tx_packets += (up&0x30) << 4;
+ /* Rx packets */ inb(ioaddr + 7);
+ /* Tx deferrals */ inb(ioaddr + 8);
+ rx = inw(ioaddr + 10);
+ tx = inw(ioaddr + 12);
+
+ EL3WINDOW(4);
+ /* BadSSD */ inb(ioaddr + 12);
+ up = inb(ioaddr + 13);
+
+ EL3WINDOW(1);
+}
+
+static int el3_rx(struct net_device *dev, int worklimit)
+{
+ unsigned int ioaddr = dev->base_addr;
+ short rx_status;
+
+ pr_debug("%s: in rx_packet(), status %4.4x, rx_status %4.4x.\n",
+ dev->name, inw(ioaddr+EL3_STATUS), inw(ioaddr+RxStatus));
+ while (!((rx_status = inw(ioaddr + RxStatus)) & 0x8000) &&
+ worklimit > 0) {
+ worklimit--;
+ if (rx_status & 0x4000) { /* Error, update stats. */
+ short error = rx_status & 0x3800;
+ dev->stats.rx_errors++;
+ switch (error) {
+ case 0x0000: dev->stats.rx_over_errors++; break;
+ case 0x0800: dev->stats.rx_length_errors++; break;
+ case 0x1000: dev->stats.rx_frame_errors++; break;
+ case 0x1800: dev->stats.rx_length_errors++; break;
+ case 0x2000: dev->stats.rx_frame_errors++; break;
+ case 0x2800: dev->stats.rx_crc_errors++; break;
+ }
+ } else {
+ short pkt_len = rx_status & 0x7ff;
+ struct sk_buff *skb;
+
+ skb = dev_alloc_skb(pkt_len+5);
+
+ pr_debug(" Receiving packet size %d status %4.4x.\n",
+ pkt_len, rx_status);
+ if (skb != NULL) {
+ skb_reserve(skb, 2);
+ insl(ioaddr+RX_FIFO, skb_put(skb, pkt_len),
+ ((pkt_len+3)>>2));
+ skb->protocol = eth_type_trans(skb, dev);
+ netif_rx(skb);
+ dev->stats.rx_packets++;
+ dev->stats.rx_bytes += pkt_len;
+ } else {
+ pr_debug("%s: couldn't allocate a sk_buff of"
+ " size %d.\n", dev->name, pkt_len);
+ dev->stats.rx_dropped++;
+ }
+ }
+ tc574_wait_for_completion(dev, RxDiscard);
+ }
+
+ return worklimit;
+}
+
+/* Provide ioctl() calls to examine the MII xcvr state. */
+static int el3_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned int ioaddr = dev->base_addr;
+ struct mii_ioctl_data *data = if_mii(rq);
+ int phy = lp->phys & 0x1f;
+
+ pr_debug("%s: In ioct(%-.6s, %#4.4x) %4.4x %4.4x %4.4x %4.4x.\n",
+ dev->name, rq->ifr_ifrn.ifrn_name, cmd,
+ data->phy_id, data->reg_num, data->val_in, data->val_out);
+
+ switch(cmd) {
+ case SIOCGMIIPHY: /* Get the address of the PHY in use. */
+ data->phy_id = phy;
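+ /* Fall through. */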
+ case SIOCGMIIREG: /* Read the specified MII register. */
+ {
+ int saved_window;
+ unsigned long flags;
+
+ spin_lock_irqsave(&lp->window_lock, flags);
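+ /* The top three bits of the status register read back the currently selected window. */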
+ saved_window = inw(ioaddr + EL3_CMD) >> 13;
+ EL3WINDOW(4);
+ data->val_out = mdio_read(ioaddr, data->phy_id & 0x1f,
+ data->reg_num & 0x1f);
+ EL3WINDOW(saved_window);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+ return 0;
+ }
+ case SIOCSMIIREG: /* Write the specified MII register */
+ {
+ int saved_window;
+ unsigned long flags;
+
+ spin_lock_irqsave(&lp->window_lock, flags);
+ saved_window = inw(ioaddr + EL3_CMD) >> 13;
+ EL3WINDOW(4);
+ mdio_write(ioaddr, data->phy_id & 0x1f,
+ data->reg_num & 0x1f, data->val_in);
+ EL3WINDOW(saved_window);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+ return 0;
+ }
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+/* The Odie chip has a 64-bin multicast filter, but the bit layout is not
+ documented. Until it is, we revert to receiving all multicast frames when
+ any multicast reception is desired.
+ Note: My other drivers emit a log message whenever promiscuous mode is
+ entered to help detect password sniffers. This is less desirable on
+ typical PC card machines, so we omit the message.
+ */
+
+static void set_rx_mode(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+
+ if (dev->flags & IFF_PROMISC)
+ outw(SetRxFilter | RxStation | RxMulticast | RxBroadcast | RxProm,
+ ioaddr + EL3_CMD);
+ else if (!netdev_mc_empty(dev) || (dev->flags & IFF_ALLMULTI))
+ outw(SetRxFilter|RxStation|RxMulticast|RxBroadcast, ioaddr + EL3_CMD);
+ else
+ outw(SetRxFilter | RxStation | RxBroadcast, ioaddr + EL3_CMD);
+}
+
+static void set_multicast_list(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned long flags;
+
+ spin_lock_irqsave(&lp->window_lock, flags);
+ set_rx_mode(dev);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+}
+
+static int el3_close(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ struct el3_private *lp = netdev_priv(dev);
+ struct pcmcia_device *link = lp->p_dev;
+
+ dev_dbg(&link->dev, "%s: shutting down ethercard.\n", dev->name);
+
+ if (pcmcia_dev_present(link)) {
+ unsigned long flags;
+
+ /* Turn off statistics ASAP. We update dev->stats below. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+
+ /* Disable the receiver and transmitter. */
+ outw(RxDisable, ioaddr + EL3_CMD);
+ outw(TxDisable, ioaddr + EL3_CMD);
+
+ /* Note: Switching to window 0 may disable the IRQ. */
+ EL3WINDOW(0);
+ spin_lock_irqsave(&lp->window_lock, flags);
+ update_stats(dev);
+ spin_unlock_irqrestore(&lp->window_lock, flags);
+
+ /* force interrupts off */
+ outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
+ }
+
+ link->open--;
+ netif_stop_queue(dev);
+ del_timer_sync(&lp->media);
+
+ return 0;
+}
+
+static const struct pcmcia_device_id tc574_ids[] = {
+ PCMCIA_DEVICE_MANF_CARD(0x0101, 0x0574),
+ PCMCIA_MFC_DEVICE_CIS_MANF_CARD(0, 0x0101, 0x0556, "cis/3CCFEM556.cis"),
+ PCMCIA_DEVICE_NULL,
+};
+MODULE_DEVICE_TABLE(pcmcia, tc574_ids);
+
+static struct pcmcia_driver tc574_driver = {
+ .owner = THIS_MODULE,
+ .name = "3c574_cs",
+ .probe = tc574_probe,
+ .remove = tc574_detach,
+ .id_table = tc574_ids,
+ .suspend = tc574_suspend,
+ .resume = tc574_resume,
+};
+
+static int __init init_tc574(void)
+{
+ return pcmcia_register_driver(&tc574_driver);
+}
+
+static void __exit exit_tc574(void)
+{
+ pcmcia_unregister_driver(&tc574_driver);
+}
+
+module_init(init_tc574);
+module_exit(exit_tc574);
--- /dev/null
+/*======================================================================
+
+ A PCMCIA ethernet driver for the 3com 3c589 card.
+
+ Copyright (C) 1999 David A. Hinds -- dahinds@users.sourceforge.net
+
+ 3c589_cs.c 1.162 2001/10/13 00:08:50
+
+ The network driver code is based on Donald Becker's 3c589 code:
+
+ Written 1994 by Donald Becker.
+ Copyright 1993 United States Government as represented by the
+ Director, National Security Agency. This software may be used and
+ distributed according to the terms of the GNU General Public License,
+ incorporated herein by reference.
+ Donald Becker may be reached at becker@scyld.com
+
+ Updated for 2.5.x by Alan Cox <alan@lxorguk.ukuu.org.uk>
+
+======================================================================*/
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#define DRV_NAME "3c589_cs"
+#define DRV_VERSION "1.162-ac"
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/ptrace.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/interrupt.h>
+#include <linux/in.h>
+#include <linux/delay.h>
+#include <linux/ethtool.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/if_arp.h>
+#include <linux/ioport.h>
+#include <linux/bitops.h>
+#include <linux/jiffies.h>
+
+#include <pcmcia/cistpl.h>
+#include <pcmcia/cisreg.h>
+#include <pcmcia/ciscode.h>
+#include <pcmcia/ds.h>
+
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/system.h>
+
+/* To minimize the size of the driver source I only define operating
+ constants if they are used several times. You'll need the manual
+ if you want to understand driver details. */
+/* Offsets from base I/O address. */
+#define EL3_DATA 0x00
+#define EL3_TIMER 0x0a
+#define EL3_CMD 0x0e
+#define EL3_STATUS 0x0e
+
+#define EEPROM_READ 0x0080
+#define EEPROM_BUSY 0x8000
+
+#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
+
+/* The top five bits written to EL3_CMD are a command, the lower
+ 11 bits are the parameter, if applicable. */
+enum c509cmd {
+ TotalReset = 0<<11,
+ SelectWindow = 1<<11,
+ StartCoax = 2<<11,
+ RxDisable = 3<<11,
+ RxEnable = 4<<11,
+ RxReset = 5<<11,
+ RxDiscard = 8<<11,
+ TxEnable = 9<<11,
+ TxDisable = 10<<11,
+ TxReset = 11<<11,
+ FakeIntr = 12<<11,
+ AckIntr = 13<<11,
+ SetIntrEnb = 14<<11,
+ SetStatusEnb = 15<<11,
+ SetRxFilter = 16<<11,
+ SetRxThreshold = 17<<11,
+ SetTxThreshold = 18<<11,
+ SetTxStart = 19<<11,
+ StatsEnable = 21<<11,
+ StatsDisable = 22<<11,
+ StopCoax = 23<<11
+};
+
+enum c509status {
+ IntLatch = 0x0001,
+ AdapterFailure = 0x0002,
+ TxComplete = 0x0004,
+ TxAvailable = 0x0008,
+ RxComplete = 0x0010,
+ RxEarly = 0x0020,
+ IntReq = 0x0040,
+ StatsFull = 0x0080,
+ CmdBusy = 0x1000
+};
+
+/* The SetRxFilter command accepts the following classes: */
+enum RxFilter {
+ RxStation = 1,
+ RxMulticast = 2,
+ RxBroadcast = 4,
+ RxProm = 8
+};
+
+/* Register window 1 offsets, the window used in normal operation. */
+#define TX_FIFO 0x00
+#define RX_FIFO 0x00
+#define RX_STATUS 0x08
+#define TX_STATUS 0x0B
+#define TX_FREE 0x0C /* Remaining free bytes in Tx buffer. */
+
+#define WN0_IRQ 0x08 /* Window 0: Set IRQ line in bits 12-15. */
+#define WN4_MEDIA 0x0A /* Window 4: Various transcvr/media bits. */
+#define MEDIA_TP 0x00C0 /* Enable link beat and jabber for 10baseT. */
+#define MEDIA_LED 0x0001 /* Enable link light on 3C589E cards. */
+
+/* Time in jiffies before concluding Tx hung */
+#define TX_TIMEOUT ((400*HZ)/1000)
+
+struct el3_private {
+ struct pcmcia_device *p_dev;
+ /* For transceiver monitoring */
+ struct timer_list media;
+ u16 media_status;
+ u16 fast_poll;
+ unsigned long last_irq;
+ spinlock_t lock;
+};
+
+static const char *if_names[] = { "auto", "10baseT", "10base2", "AUI" };
+
+/*====================================================================*/
+
+/* Module parameters */
+
+MODULE_AUTHOR("David Hinds <dahinds@users.sourceforge.net>");
+MODULE_DESCRIPTION("3Com 3c589 series PCMCIA ethernet driver");
+MODULE_LICENSE("GPL");
+
+#define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0)
+
+/* Special hook for setting if_port when module is loaded */
+INT_MODULE_PARM(if_port, 0);
+
+
+/*====================================================================*/
+
+static int tc589_config(struct pcmcia_device *link);
+static void tc589_release(struct pcmcia_device *link);
+
+static u16 read_eeprom(unsigned int ioaddr, int index);
+static void tc589_reset(struct net_device *dev);
+static void media_check(unsigned long arg);
+static int el3_config(struct net_device *dev, struct ifmap *map);
+static int el3_open(struct net_device *dev);
+static netdev_tx_t el3_start_xmit(struct sk_buff *skb,
+ struct net_device *dev);
+static irqreturn_t el3_interrupt(int irq, void *dev_id);
+static void update_stats(struct net_device *dev);
+static struct net_device_stats *el3_get_stats(struct net_device *dev);
+static int el3_rx(struct net_device *dev);
+static int el3_close(struct net_device *dev);
+static void el3_tx_timeout(struct net_device *dev);
+static void set_rx_mode(struct net_device *dev);
+static void set_multicast_list(struct net_device *dev);
+static const struct ethtool_ops netdev_ethtool_ops;
+
+static void tc589_detach(struct pcmcia_device *p_dev);
+
+static const struct net_device_ops el3_netdev_ops = {
+ .ndo_open = el3_open,
+ .ndo_stop = el3_close,
+ .ndo_start_xmit = el3_start_xmit,
+ .ndo_tx_timeout = el3_tx_timeout,
+ .ndo_set_config = el3_config,
+ .ndo_get_stats = el3_get_stats,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
+static int tc589_probe(struct pcmcia_device *link)
+{
+ struct el3_private *lp;
+ struct net_device *dev;
+
+ dev_dbg(&link->dev, "3c589_attach()\n");
+
+ /* Create new ethernet device */
+ dev = alloc_etherdev(sizeof(struct el3_private));
+ if (!dev)
+ return -ENOMEM;
+ lp = netdev_priv(dev);
+ link->priv = dev;
+ lp->p_dev = link;
+
+ spin_lock_init(&lp->lock);
+ link->resource[0]->end = 16;
+ link->resource[0]->flags |= IO_DATA_PATH_WIDTH_16;
+
+ link->config_flags |= CONF_ENABLE_IRQ;
+ link->config_index = 1;
+
+ dev->netdev_ops = &el3_netdev_ops;
+ dev->watchdog_timeo = TX_TIMEOUT;
+
+ SET_ETHTOOL_OPS(dev, &netdev_ethtool_ops);
+
+ return tc589_config(link);
+}
+
+static void tc589_detach(struct pcmcia_device *link)
+{
+ struct net_device *dev = link->priv;
+
+ dev_dbg(&link->dev, "3c589_detach\n");
+
+ unregister_netdev(dev);
+
+ tc589_release(link);
+
+ free_netdev(dev);
+} /* tc589_detach */
+
+static int tc589_config(struct pcmcia_device *link)
+{
+ struct net_device *dev = link->priv;
+ __be16 *phys_addr;
+ int ret, i, j, multi = 0, fifo;
+ unsigned int ioaddr;
+ static const char * const ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
+ u8 *buf;
+ size_t len;
+
+ dev_dbg(&link->dev, "3c589_config\n");
+
+ phys_addr = (__be16 *)dev->dev_addr;
+ /* Is this a 3c562? */
+ if (link->manf_id != MANFID_3COM)
+ dev_info(&link->dev, "hmmm, is this really a 3Com card??\n");
+ multi = (link->card_id == PRODID_3COM_3C562);
+
+ link->io_lines = 16;
+
+ /* For the 3c562, the base address must be xx00-xx7f */
+ for (i = j = 0; j < 0x400; j += 0x10) {
+ if (multi && (j & 0x80)) continue;
+ link->resource[0]->start = j ^ 0x300;
+ i = pcmcia_request_io(link);
+ if (i == 0)
+ break;
+ }
+ if (i != 0)
+ goto failed;
+
+ ret = pcmcia_request_irq(link, el3_interrupt);
+ if (ret)
+ goto failed;
+
+ ret = pcmcia_enable_device(link);
+ if (ret)
+ goto failed;
+
+ dev->irq = link->irq;
+ dev->base_addr = link->resource[0]->start;
+ ioaddr = dev->base_addr;
+ EL3WINDOW(0);
+
+ /* The 3c589 has an extra EEPROM for configuration info, including
+ the hardware address. The 3c562 puts the address in the CIS. */
+ len = pcmcia_get_tuple(link, 0x88, &buf);
+ if (buf && len >= 6) {
+ for (i = 0; i < 3; i++)
+ phys_addr[i] = htons(le16_to_cpu(buf[i*2]));
+ kfree(buf);
+ } else {
+ kfree(buf); /* 0 < len < 6 */
+ for (i = 0; i < 3; i++)
+ phys_addr[i] = htons(read_eeprom(ioaddr, i));
+ if (phys_addr[0] == htons(0x6060)) {
+ dev_err(&link->dev, "IO port conflict at 0x%03lx-0x%03lx\n",
+ dev->base_addr, dev->base_addr+15);
+ goto failed;
+ }
+ }
+
+ /* The address and resource configuration register aren't loaded from
+ the EEPROM and *must* be set to 0 and IRQ3 for the PCMCIA version. */
+ outw(0x3f00, ioaddr + 8);
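+ /* Window 0 dword 0 reports the FIFO size and Rx:Tx split used in the banner below. */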
+ fifo = inl(ioaddr);
+
+ /* The if_port symbol can be set when the module is loaded */
+ if ((if_port >= 0) && (if_port <= 3))
+ dev->if_port = if_port;
+ else
+ dev_err(&link->dev, "invalid if_port requested\n");
+
+ SET_NETDEV_DEV(dev, &link->dev);
+
+ if (register_netdev(dev) != 0) {
+ dev_err(&link->dev, "register_netdev() failed\n");
+ goto failed;
+ }
+
+ netdev_info(dev, "3Com 3c%s, io %#3lx, irq %d, hw_addr %pM\n",
+ (multi ? "562" : "589"), dev->base_addr, dev->irq,
+ dev->dev_addr);
+ netdev_info(dev, " %dK FIFO split %s Rx:Tx, %s xcvr\n",
+ (fifo & 7) ? 32 : 8, ram_split[(fifo >> 16) & 3],
+ if_names[dev->if_port]);
+ return 0;
+
+failed:
+ tc589_release(link);
+ return -ENODEV;
+} /* tc589_config */
+
+static void tc589_release(struct pcmcia_device *link)
+{
+ pcmcia_disable_device(link);
+}
+
+static int tc589_suspend(struct pcmcia_device *link)
+{
+ struct net_device *dev = link->priv;
+
+ if (link->open)
+ netif_device_detach(dev);
+
+ return 0;
+}
+
+static int tc589_resume(struct pcmcia_device *link)
+{
+ struct net_device *dev = link->priv;
+
+ if (link->open) {
+ tc589_reset(dev);
+ netif_device_attach(dev);
+ }
+
+ return 0;
+}
+
+/*====================================================================*/
+
+/*
+ Use this for commands that may take time to finish
+*/
+static void tc589_wait_for_completion(struct net_device *dev, int cmd)
+{
+ int i = 100;
+ outw(cmd, dev->base_addr + EL3_CMD);
+ while (--i > 0)
+ if (!(inw(dev->base_addr + EL3_STATUS) & 0x1000)) break;
+ if (i == 0)
+ netdev_warn(dev, "command 0x%04x did not complete!\n", cmd);
+}
+
+/*
+ Read a word from the EEPROM using the regular EEPROM access register.
+ Assume that we are in register window zero.
+*/
+static u16 read_eeprom(unsigned int ioaddr, int index)
+{
+ int i;
+ outw(EEPROM_READ + index, ioaddr + 10);
+ /* Reading the eeprom takes 162 us */
+ for (i = 1620; i >= 0; i--)
+ if ((inw(ioaddr + 10) & EEPROM_BUSY) == 0)
+ break;
+ return inw(ioaddr + 12);
+}
+
+/*
+ Set transceiver type, perhaps to something other than what the user
+ specified in dev->if_port.
+*/
+static void tc589_set_xcvr(struct net_device *dev, int if_port)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned int ioaddr = dev->base_addr;
+
+ EL3WINDOW(0);
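+ /* Window 0 offset 6 is the address configuration register; its top two bits select the transceiver. */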
+ switch (if_port) {
+ case 0: case 1: outw(0, ioaddr + 6); break;
+ case 2: outw(3<<14, ioaddr + 6); break;
+ case 3: outw(1<<14, ioaddr + 6); break;
+ }
+ /* On PCMCIA, this just turns on the LED */
+ outw((if_port == 2) ? StartCoax : StopCoax, ioaddr + EL3_CMD);
+ /* 10baseT interface, enable link beat and jabber check. */
+ EL3WINDOW(4);
+ outw(MEDIA_LED | ((if_port < 2) ? MEDIA_TP : 0), ioaddr + WN4_MEDIA);
+ EL3WINDOW(1);
+ if (if_port == 2)
+ lp->media_status = ((dev->if_port == 0) ? 0x8000 : 0x4000);
+ else
+ lp->media_status = ((dev->if_port == 0) ? 0x4010 : 0x8800);
+}
+
+static void dump_status(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ EL3WINDOW(1);
+ netdev_info(dev, " irq status %04x, rx status %04x, tx status %02x tx free %04x\n",
+ inw(ioaddr+EL3_STATUS), inw(ioaddr+RX_STATUS),
+ inb(ioaddr+TX_STATUS), inw(ioaddr+TX_FREE));
+ EL3WINDOW(4);
+ netdev_info(dev, " diagnostics: fifo %04x net %04x ethernet %04x media %04x\n",
+ inw(ioaddr+0x04), inw(ioaddr+0x06), inw(ioaddr+0x08),
+ inw(ioaddr+0x0a));
+ EL3WINDOW(1);
+}
+
+/* Reset and restore all of the 3c589 registers. */
+static void tc589_reset(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ int i;
+
+ EL3WINDOW(0);
+ outw(0x0001, ioaddr + 4); /* Activate board. */
+ outw(0x3f00, ioaddr + 8); /* Set the IRQ line. */
+
+ /* Set the station address in window 2. */
+ EL3WINDOW(2);
+ for (i = 0; i < 6; i++)
+ outb(dev->dev_addr[i], ioaddr + i);
+
+ tc589_set_xcvr(dev, dev->if_port);
+
+ /* Switch to the stats window, and clear all stats by reading. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+ EL3WINDOW(6);
+ for (i = 0; i < 9; i++)
+ inb(ioaddr+i);
+ inw(ioaddr + 10);
+ inw(ioaddr + 12);
+
+ /* Switch to register set 1 for normal use. */
+ EL3WINDOW(1);
+
+ set_rx_mode(dev);
+ outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
+ outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
+ outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
+ /* Allow status bits to be seen. */
+ outw(SetStatusEnb | 0xff, ioaddr + EL3_CMD);
+ /* Ack all pending events, and set active indicator mask. */
+ outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
+ ioaddr + EL3_CMD);
+ outw(SetIntrEnb | IntLatch | TxAvailable | RxComplete | StatsFull
+ | AdapterFailure, ioaddr + EL3_CMD);
+}
+
+static void netdev_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ strcpy(info->driver, DRV_NAME);
+ strcpy(info->version, DRV_VERSION);
+ sprintf(info->bus_info, "PCMCIA 0x%lx", dev->base_addr);
+}
+
+static const struct ethtool_ops netdev_ethtool_ops = {
+ .get_drvinfo = netdev_get_drvinfo,
+};
+
+static int el3_config(struct net_device *dev, struct ifmap *map)
+{
+ if ((map->port != (u_char)(-1)) && (map->port != dev->if_port)) {
+ if (map->port <= 3) {
+ dev->if_port = map->port;
+ netdev_info(dev, "switched to %s port\n", if_names[dev->if_port]);
+ tc589_set_xcvr(dev, dev->if_port);
+ } else
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int el3_open(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ struct pcmcia_device *link = lp->p_dev;
+
+ if (!pcmcia_dev_present(link))
+ return -ENODEV;
+
+ link->open++;
+ netif_start_queue(dev);
+
+ tc589_reset(dev);
+ init_timer(&lp->media);
+ lp->media.function = media_check;
+ lp->media.data = (unsigned long) dev;
+ lp->media.expires = jiffies + HZ;
+ add_timer(&lp->media);
+
+ dev_dbg(&link->dev, "%s: opened, status %4.4x.\n",
+ dev->name, inw(dev->base_addr + EL3_STATUS));
+
+ return 0;
+}
+
+static void el3_tx_timeout(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+
+ netdev_warn(dev, "Transmit timed out!\n");
+ dump_status(dev);
+ dev->stats.tx_errors++;
+ dev->trans_start = jiffies; /* prevent tx timeout */
+ /* Issue TX_RESET and TX_START commands. */
+ tc589_wait_for_completion(dev, TxReset);
+ outw(TxEnable, ioaddr + EL3_CMD);
+ netif_wake_queue(dev);
+}
+
+static void pop_tx_status(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ int i;
+
+ /* Clear the Tx status stack. */
+ for (i = 32; i > 0; i--) {
+ u_char tx_status = inb(ioaddr + TX_STATUS);
+ if (!(tx_status & 0x84)) break;
+ /* reset transmitter on jabber error or underrun */
+ if (tx_status & 0x30)
+ tc589_wait_for_completion(dev, TxReset);
+ if (tx_status & 0x38) {
+ netdev_dbg(dev, "transmit error: status 0x%02x\n", tx_status);
+ outw(TxEnable, ioaddr + EL3_CMD);
+ dev->stats.tx_aborted_errors++;
+ }
+ outb(0x00, ioaddr + TX_STATUS); /* Pop the status stack. */
+ }
+}
+
+static netdev_tx_t el3_start_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ struct el3_private *priv = netdev_priv(dev);
+ unsigned long flags;
+
+ netdev_dbg(dev, "el3_start_xmit(length = %ld) called, status %4.4x.\n",
+ (long)skb->len, inw(ioaddr + EL3_STATUS));
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ dev->stats.tx_bytes += skb->len;
+
+ /* Put out the doubleword header... */
+ outw(skb->len, ioaddr + TX_FIFO);
+ outw(0x00, ioaddr + TX_FIFO);
+ /* ... and the packet rounded to a doubleword. */
+ outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
+
+ if (inw(ioaddr + TX_FREE) <= 1536) {
+ netif_stop_queue(dev);
+ /* Interrupt us when the FIFO has room for max-sized packet. */
+ outw(SetTxThreshold + 1536, ioaddr + EL3_CMD);
+ }
+
+ pop_tx_status(dev);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ dev_kfree_skb(skb);
+
+ return NETDEV_TX_OK;
+}
+
+/* The EL3 interrupt handler. */
+static irqreturn_t el3_interrupt(int irq, void *dev_id)
+{
+ struct net_device *dev = (struct net_device *) dev_id;
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned int ioaddr;
+ __u16 status;
+ int i = 0, handled = 1;
+
+ if (!netif_device_present(dev))
+ return IRQ_NONE;
+
+ ioaddr = dev->base_addr;
+
+ netdev_dbg(dev, "interrupt, status %4.4x.\n", inw(ioaddr + EL3_STATUS));
+
+ spin_lock(&lp->lock);
+ while ((status = inw(ioaddr + EL3_STATUS)) &
+ (IntLatch | RxComplete | StatsFull)) {
+ if ((status & 0xe000) != 0x2000) {
+ netdev_dbg(dev, "interrupt from dead card\n");
+ handled = 0;
+ break;
+ }
+ if (status & RxComplete)
+ el3_rx(dev);
+ if (status & TxAvailable) {
+ netdev_dbg(dev, " TX room bit was handled.\n");
+ /* There's room in the FIFO for a full-sized packet. */
+ outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
+ netif_wake_queue(dev);
+ }
+ if (status & TxComplete)
+ pop_tx_status(dev);
+ if (status & (AdapterFailure | RxEarly | StatsFull)) {
+ /* Handle all uncommon interrupts. */
+ if (status & StatsFull) /* Empty statistics. */
+ update_stats(dev);
+ if (status & RxEarly) { /* Rx early is unused. */
+ el3_rx(dev);
+ outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
+ }
+ if (status & AdapterFailure) {
+ u16 fifo_diag;
+ EL3WINDOW(4);
+ fifo_diag = inw(ioaddr + 4);
+ EL3WINDOW(1);
+ netdev_warn(dev, "adapter failure, FIFO diagnostic register %04x.\n",
+ fifo_diag);
+ if (fifo_diag & 0x0400) {
+ /* Tx overrun */
+ tc589_wait_for_completion(dev, TxReset);
+ outw(TxEnable, ioaddr + EL3_CMD);
+ }
+ if (fifo_diag & 0x2000) {
+ /* Rx underrun */
+ tc589_wait_for_completion(dev, RxReset);
+ set_rx_mode(dev);
+ outw(RxEnable, ioaddr + EL3_CMD);
+ }
+ outw(AckIntr | AdapterFailure, ioaddr + EL3_CMD);
+ }
+ }
+ if (++i > 10) {
+ netdev_err(dev, "infinite loop in interrupt, status %4.4x.\n",
+ status);
+ /* Clear all interrupts */
+ outw(AckIntr | 0xFF, ioaddr + EL3_CMD);
+ break;
+ }
+ /* Acknowledge the IRQ. */
+ outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
+ }
+ lp->last_irq = jiffies;
+ spin_unlock(&lp->lock);
+ netdev_dbg(dev, "exiting interrupt, status %4.4x.\n",
+ inw(ioaddr + EL3_STATUS));
+ return IRQ_RETVAL(handled);
+}
+
+static void media_check(unsigned long arg)
+{
+ struct net_device *dev = (struct net_device *)(arg);
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned int ioaddr = dev->base_addr;
+ u16 media, errs;
+ unsigned long flags;
+
+ if (!netif_device_present(dev)) goto reschedule;
+
+ /* Check for pending interrupt with expired latency timer: with
+ this, we can limp along even if the interrupt is blocked */
+ if ((inw(ioaddr + EL3_STATUS) & IntLatch) &&
+ (inb(ioaddr + EL3_TIMER) == 0xff)) {
+ if (!lp->fast_poll)
+ netdev_warn(dev, "interrupt(s) dropped!\n");
+
+ local_irq_save(flags);
+ el3_interrupt(dev->irq, dev);
+ local_irq_restore(flags);
+
+ lp->fast_poll = HZ;
+ }
+ if (lp->fast_poll) {
+ lp->fast_poll--;
+ lp->media.expires = jiffies + HZ/100;
+ add_timer(&lp->media);
+ return;
+ }
+
+ /* lp->lock guards the EL3 window. Window should always be 1 except
+ when the lock is held */
+ spin_lock_irqsave(&lp->lock, flags);
+ EL3WINDOW(4);
+ media = inw(ioaddr+WN4_MEDIA) & 0xc810;
+
+ /* Ignore collisions unless we've had no irq's recently */
+ if (time_before(jiffies, lp->last_irq + HZ)) {
+ media &= ~0x0010;
+ } else {
+ /* Try harder to detect carrier errors */
+ EL3WINDOW(6);
+ outw(StatsDisable, ioaddr + EL3_CMD);
+ errs = inb(ioaddr + 0);
+ outw(StatsEnable, ioaddr + EL3_CMD);
+ dev->stats.tx_carrier_errors += errs;
+ if (errs || (lp->media_status & 0x0010)) media |= 0x0010;
+ }
+
+ if (media != lp->media_status) {
+ if ((media & lp->media_status & 0x8000) &&
+ ((lp->media_status ^ media) & 0x0800))
+ netdev_info(dev, "%s link beat\n",
+ (lp->media_status & 0x0800 ? "lost" : "found"));
+ else if ((media & lp->media_status & 0x4000) &&
+ ((lp->media_status ^ media) & 0x0010))
+ netdev_info(dev, "coax cable %s\n",
+ (lp->media_status & 0x0010 ? "ok" : "problem"));
+ if (dev->if_port == 0) {
+ if (media & 0x8000) {
+ if (media & 0x0800)
+ netdev_info(dev, "flipped to 10baseT\n");
+ else
+ tc589_set_xcvr(dev, 2);
+ } else if (media & 0x4000) {
+ if (media & 0x0010)
+ tc589_set_xcvr(dev, 1);
+ else
+ netdev_info(dev, "flipped to 10base2\n");
+ }
+ }
+ lp->media_status = media;
+ }
+
+ EL3WINDOW(1);
+ spin_unlock_irqrestore(&lp->lock, flags);
+
+reschedule:
+ lp->media.expires = jiffies + HZ;
+ add_timer(&lp->media);
+}
+
+static struct net_device_stats *el3_get_stats(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ unsigned long flags;
+ struct pcmcia_device *link = lp->p_dev;
+
+ if (pcmcia_dev_present(link)) {
+ spin_lock_irqsave(&lp->lock, flags);
+ update_stats(dev);
+ spin_unlock_irqrestore(&lp->lock, flags);
+ }
+ return &dev->stats;
+}
+
+/*
+ Update statistics. We change to register window 6, so this should be run
+ single-threaded if the device is active. This is expected to be a rare
+ operation, and it's simpler for the rest of the driver to assume that
+ window 1 is always valid rather than use a special window-state variable.
+
+ Caller must hold the lock for this
+*/
+static void update_stats(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+
+ netdev_dbg(dev, "updating the statistics.\n");
+ /* Turn off statistics updates while reading. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+ /* Switch to the stats window, and read everything. */
+ EL3WINDOW(6);
+ dev->stats.tx_carrier_errors += inb(ioaddr + 0);
+ dev->stats.tx_heartbeat_errors += inb(ioaddr + 1);
+ /* Multiple collisions. */ inb(ioaddr + 2);
+ dev->stats.collisions += inb(ioaddr + 3);
+ dev->stats.tx_window_errors += inb(ioaddr + 4);
+ dev->stats.rx_fifo_errors += inb(ioaddr + 5);
+ dev->stats.tx_packets += inb(ioaddr + 6);
+ /* Rx packets */ inb(ioaddr + 7);
+ /* Tx deferrals */ inb(ioaddr + 8);
+ /* Rx octets */ inw(ioaddr + 10);
+ /* Tx octets */ inw(ioaddr + 12);
+
+ /* Back to window 1, and turn statistics back on. */
+ EL3WINDOW(1);
+ outw(StatsEnable, ioaddr + EL3_CMD);
+}
+
+static int el3_rx(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ int worklimit = 32;
+ short rx_status;
+
+ netdev_dbg(dev, "in rx_packet(), status %4.4x, rx_status %4.4x.\n",
+ inw(ioaddr+EL3_STATUS), inw(ioaddr+RX_STATUS));
+ while (!((rx_status = inw(ioaddr + RX_STATUS)) & 0x8000) &&
+ worklimit > 0) {
+ worklimit--;
+ if (rx_status & 0x4000) { /* Error, update stats. */
+ short error = rx_status & 0x3800;
+ dev->stats.rx_errors++;
+ switch (error) {
+ case 0x0000: dev->stats.rx_over_errors++; break;
+ case 0x0800: dev->stats.rx_length_errors++; break;
+ case 0x1000: dev->stats.rx_frame_errors++; break;
+ case 0x1800: dev->stats.rx_length_errors++; break;
+ case 0x2000: dev->stats.rx_frame_errors++; break;
+ case 0x2800: dev->stats.rx_crc_errors++; break;
+ }
+ } else {
+ short pkt_len = rx_status & 0x7ff;
+ struct sk_buff *skb;
+
+ skb = dev_alloc_skb(pkt_len+5);
+
+ netdev_dbg(dev, " Receiving packet size %d status %4.4x.\n",
+ pkt_len, rx_status);
+ if (skb != NULL) {
+ skb_reserve(skb, 2);
+ insl(ioaddr+RX_FIFO, skb_put(skb, pkt_len),
+ (pkt_len+3)>>2);
+ skb->protocol = eth_type_trans(skb, dev);
+ netif_rx(skb);
+ dev->stats.rx_packets++;
+ dev->stats.rx_bytes += pkt_len;
+ } else {
+ netdev_dbg(dev, "couldn't allocate a sk_buff of size %d.\n",
+ pkt_len);
+ dev->stats.rx_dropped++;
+ }
+ }
+ /* Pop the top of the Rx FIFO */
+ tc589_wait_for_completion(dev, RxDiscard);
+ }
+ if (worklimit == 0)
+ netdev_warn(dev, "too much work in el3_rx!\n");
+ return 0;
+}
+
+static void set_rx_mode(struct net_device *dev)
+{
+ unsigned int ioaddr = dev->base_addr;
+ u16 opts = SetRxFilter | RxStation | RxBroadcast;
+
+ if (dev->flags & IFF_PROMISC)
+ opts |= RxMulticast | RxProm;
+ else if (!netdev_mc_empty(dev) || (dev->flags & IFF_ALLMULTI))
+ opts |= RxMulticast;
+ outw(opts, ioaddr + EL3_CMD);
+}
+
+static void set_multicast_list(struct net_device *dev)
+{
+ struct el3_private *priv = netdev_priv(dev);
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+ set_rx_mode(dev);
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+
+static int el3_close(struct net_device *dev)
+{
+ struct el3_private *lp = netdev_priv(dev);
+ struct pcmcia_device *link = lp->p_dev;
+ unsigned int ioaddr = dev->base_addr;
+
+ dev_dbg(&link->dev, "%s: shutting down ethercard.\n", dev->name);
+
+ if (pcmcia_dev_present(link)) {
+ /* Turn off statistics ASAP. We update dev->stats below. */
+ outw(StatsDisable, ioaddr + EL3_CMD);
+
+ /* Disable the receiver and transmitter. */
+ outw(RxDisable, ioaddr + EL3_CMD);
+ outw(TxDisable, ioaddr + EL3_CMD);
+
+ if (dev->if_port == 2)
+ /* Turn off thinnet power. Green! */
+ outw(StopCoax, ioaddr + EL3_CMD);
+ else if (dev->if_port == 1) {
+ /* Disable link beat and jabber */
+ EL3WINDOW(4);
+ outw(0, ioaddr + WN4_MEDIA);
+ }
+
+ /* Switching back to window 0 disables the IRQ. */
+ EL3WINDOW(0);
+ /* But we explicitly zero the IRQ line select anyway. */
+ outw(0x0f00, ioaddr + WN0_IRQ);
+
+ /* Check if the card still exists */
+ if ((inw(ioaddr+EL3_STATUS) & 0xe000) == 0x2000)
+ update_stats(dev);
+ }
+
+ link->open--;
+ netif_stop_queue(dev);
+ del_timer_sync(&lp->media);
+
+ return 0;
+}
+
+static const struct pcmcia_device_id tc589_ids[] = {
+ PCMCIA_MFC_DEVICE_MANF_CARD(0, 0x0101, 0x0562),
+ PCMCIA_MFC_DEVICE_PROD_ID1(0, "Motorola MARQUIS", 0xf03e4e77),
+ PCMCIA_DEVICE_MANF_CARD(0x0101, 0x0589),
+ PCMCIA_DEVICE_PROD_ID12("Farallon", "ENet", 0x58d93fc4, 0x992c2202),
+ PCMCIA_MFC_DEVICE_CIS_MANF_CARD(0, 0x0101, 0x0035, "cis/3CXEM556.cis"),
+ PCMCIA_MFC_DEVICE_CIS_MANF_CARD(0, 0x0101, 0x003d, "cis/3CXEM556.cis"),
+ PCMCIA_DEVICE_NULL,
+};
+MODULE_DEVICE_TABLE(pcmcia, tc589_ids);
+
+static struct pcmcia_driver tc589_driver = {
+ .owner = THIS_MODULE,
+ .name = "3c589_cs",
+ .probe = tc589_probe,
+ .remove = tc589_detach,
+ .id_table = tc589_ids,
+ .suspend = tc589_suspend,
+ .resume = tc589_resume,
+};
+
+static int __init init_tc589(void)
+{
+ return pcmcia_register_driver(&tc589_driver);
+}
+
+static void __exit exit_tc589(void)
+{
+ pcmcia_unregister_driver(&tc589_driver);
+}
+
+module_init(init_tc589);
+module_exit(exit_tc589);
--- /dev/null
+/* EtherLinkXL.c: A 3Com EtherLink PCI III/XL ethernet driver for linux. */
+/*
+ Written 1996-1999 by Donald Becker.
+
+ This software may be used and distributed according to the terms
+ of the GNU General Public License, incorporated herein by reference.
+
+ This driver is for the 3Com "Vortex" and "Boomerang" series ethercards.
+ Members of the series include Fast EtherLink 3c590/3c592/3c595/3c597
+ and the EtherLink XL 3c900 and 3c905 cards.
+
+ Problem reports and questions should be directed to
+ vortex@scyld.com
+
+ The author may be reached as becker@scyld.com, or C/O
+ Scyld Computing Corporation
+ 410 Severn Ave., Suite 210
+ Annapolis MD 21403
+
+*/
+
+/*
+ * FIXME: This driver _could_ support MTU changing, but doesn't. See Don's hamachi.c implementation
+ * as well as other drivers
+ *
+ * NOTE: If you make 'vortex_debug' a constant (#define vortex_debug 0) the driver shrinks by 2k
+ * due to dead code elimination. There will be some performance benefits from this due to
+ * elimination of all the tests and reduced cache footprint.
+ */
+
+
+#define DRV_NAME "3c59x"
+
+
+
+/* A few values that may be tweaked. */
+/* Keep the ring sizes a power of two for efficiency. */
+#define TX_RING_SIZE 16
+#define RX_RING_SIZE 32
+#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
+
+/* "Knobs" that adjust features and parameters. */
+/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
+ Setting to > 1512 effectively disables this feature. */
+#ifndef __arm__
+static int rx_copybreak = 200;
+#else
+/* ARM systems perform better by disregarding the bus-master
+ transfer capability of these cards. -- rmk */
+static int rx_copybreak = 1513;
+#endif
+/* Allow setting MTU to a larger size, bypassing the normal ethernet setup. */
+static const int mtu = 1500;
+/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
+static int max_interrupt_work = 32;
+/* Tx timeout interval (millisecs) */
+static int watchdog = 5000;
+
+/* Allow aggregation of Tx interrupts. Saves CPU load at the cost
+ * of possible Tx stalls if the system is blocking interrupts
+ * somewhere else. Undefine this to disable.
+ */
+#define tx_interrupt_mitigation 1
+
+/* Put out somewhat more debugging messages. (0: no msg, 1 minimal .. 6). */
+#define vortex_debug debug
+#ifdef VORTEX_DEBUG
+static int vortex_debug = VORTEX_DEBUG;
+#else
+static int vortex_debug = 1;
+#endif
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/errno.h>
+#include <linux/in.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/mii.h>
+#include <linux/init.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/ethtool.h>
+#include <linux/highmem.h>
+#include <linux/eisa.h>
+#include <linux/bitops.h>
+#include <linux/jiffies.h>
+#include <linux/gfp.h>
+#include <asm/irq.h> /* For nr_irqs only. */
+#include <asm/io.h>
+#include <asm/uaccess.h>
+
+/* Kernel compatibility defines, some common to David Hinds' PCMCIA package.
+ This is only in the support-all-kernels source code. */
+
+#define RUN_AT(x) (jiffies + (x))
+
+#include <linux/delay.h>
+
+
+static const char version[] __devinitconst =
+ DRV_NAME ": Donald Becker and others.\n";
+
+MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
+MODULE_DESCRIPTION("3Com 3c59x/3c9xx ethernet driver ");
+MODULE_LICENSE("GPL");
+
+
+/* Operational parameters that are not usually changed. */
+
+/* The Vortex size is twice that of the original EtherLinkIII series: the
+ runtime register window, window 1, is now always mapped in.
+ The Boomerang size is twice as large as the Vortex -- it has additional
+ bus master control registers. */
+#define VORTEX_TOTAL_SIZE 0x20
+#define BOOMERANG_TOTAL_SIZE 0x40
+
+/* Set iff a MII transceiver on any interface requires mdio preamble.
+   This is only set with the original DP83840 on older 3c905 boards, so the extra
+ code size of a per-interface flag is not worthwhile. */
+static char mii_preamble_required;
+
+#define PFX DRV_NAME ": "
+
+
+
+/*
+ Theory of Operation
+
+I. Board Compatibility
+
+This device driver is designed for the 3Com FastEtherLink and FastEtherLink
+XL, 3Com's PCI to 10/100baseT adapters. It also works with the 10Mbps
+versions of the FastEtherLink cards. The supported product IDs are
+ 3c590, 3c592, 3c595, 3c597, 3c900, 3c905
+
+The related ISA 3c515 is supported with a separate driver, 3c515.c, included
+with the kernel source or available from
+ cesdis.gsfc.nasa.gov:/pub/linux/drivers/3c515.html
+
+II. Board-specific settings
+
+PCI bus devices are configured by the system at boot time, so no jumpers
+need to be set on the board. The system BIOS should be set to assign the
+PCI INTA signal to an otherwise unused system IRQ line.
+
+The EEPROM settings for media type and forced-full-duplex are observed.
+The EEPROM media type should be left at the default "autoselect" unless using
+10base2 or AUI connections which cannot be reliably detected.
+
+III. Driver operation
+
+The 3c59x series use an interface that's very similar to the previous 3c5x9
+series. The primary interface is two programmed-I/O FIFOs, with an
+alternate single-contiguous-region bus-master transfer (see next).
+
+The 3c900 "Boomerang" series uses a full-bus-master interface with separate
+lists of transmit and receive descriptors, similar to the AMD LANCE/PCnet,
+DEC Tulip and Intel Speedo3. The first chip version retains a compatible
+programmed-I/O interface that has been removed in 'B' and subsequent board
+revisions.
+
+One extension that is advertised in a very large font is that the adapters
+are capable of being bus masters. On the Vortex chip this capability was
+only for a single contiguous region, making it far less useful than the full
+bus master capability. There is a significant performance impact from taking
+an extra interrupt or polling for the completion of each transfer, as well
+as difficulty sharing the single transfer engine between the transmit and
+receive threads. Using DMA transfers is a win only with large blocks or
+with the flawed versions of the Intel Orion motherboard PCI controller.
+
+The Boomerang chip's full-bus-master interface is useful, and has the
+currently unused advantages over other similar chips that queued transmit
+packets may be reordered and that receive buffer groups are associated with a
+single frame.
+
+With full-bus-master support, this driver uses a "RX_COPYBREAK" scheme.
+Rather than a fixed intermediate receive buffer, this scheme allocates
+full-sized skbuffs as receive buffers. The value RX_COPYBREAK is used as
+the copying breakpoint: it is chosen to trade off the memory wasted by
+passing the full-sized skbuff to the queue layer for all frames against the
+cost of copying a frame to a correctly-sized skbuff.
+
+IIIC. Synchronization
+The driver runs as two independent, single-threaded flows of control. One
+is the send-packet routine, which enforces single-threaded use by the
+dev->tbusy flag. The other thread is the interrupt handler, which is single
+threaded by the hardware and other software.
+
+IV. Notes
+
+Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing development
+3c590, 3c595, and 3c900 boards.
+The name "Vortex" is the internal 3Com project name for the PCI ASIC, and
+the EISA version is called "Demon". According to Terry these names come
+from rides at the local amusement park.
+
+The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes!
+This driver only supports ethernet packets because of the skbuff allocation
+limit of 4K.
+*/
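+
+/* An illustrative sketch of the copybreak decision described above (the real
+   logic lives in vortex_rx() and boomerang_rx() further down; details are
+   simplified here):
+
+	if (pkt_len < rx_copybreak &&
+	    (skb = dev_alloc_skb(pkt_len + 2)) != NULL)
+		... reserve 2 bytes to align the IP header, copy the small
+		    frame and leave the ring skbuff in place ...
+	else
+		... pass the full-sized ring skbuff upstream and refill the
+		    ring with a freshly allocated one ...
+*/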
+
+/* This table drives the PCI probe routines. It's mostly boilerplate in all
+ of the drivers, and will likely be provided by some future kernel.
+*/
+enum pci_flags_bit {
+ PCI_USES_MASTER=4,
+};
+
+enum { IS_VORTEX=1, IS_BOOMERANG=2, IS_CYCLONE=4, IS_TORNADO=8,
+ EEPROM_8BIT=0x10, /* AKPM: Uses 0x230 as the base bitmaps for EEPROM reads */
+ HAS_PWR_CTRL=0x20, HAS_MII=0x40, HAS_NWAY=0x80, HAS_CB_FNS=0x100,
+ INVERT_MII_PWR=0x200, INVERT_LED_PWR=0x400, MAX_COLLISION_RESET=0x800,
+ EEPROM_OFFSET=0x1000, HAS_HWCKSM=0x2000, WNO_XCVR_PWR=0x4000,
+ EXTRA_PREAMBLE=0x8000, EEPROM_RESET=0x10000, };
+
+enum vortex_chips {
+ CH_3C590 = 0,
+ CH_3C592,
+ CH_3C597,
+ CH_3C595_1,
+ CH_3C595_2,
+
+ CH_3C595_3,
+ CH_3C900_1,
+ CH_3C900_2,
+ CH_3C900_3,
+ CH_3C900_4,
+
+ CH_3C900_5,
+ CH_3C900B_FL,
+ CH_3C905_1,
+ CH_3C905_2,
+ CH_3C905B_TX,
+ CH_3C905B_1,
+
+ CH_3C905B_2,
+ CH_3C905B_FX,
+ CH_3C905C,
+ CH_3C9202,
+ CH_3C980,
+ CH_3C9805,
+
+ CH_3CSOHO100_TX,
+ CH_3C555,
+ CH_3C556,
+ CH_3C556B,
+ CH_3C575,
+
+ CH_3C575_1,
+ CH_3CCFE575,
+ CH_3CCFE575CT,
+ CH_3CCFE656,
+ CH_3CCFEM656,
+
+ CH_3CCFEM656_1,
+ CH_3C450,
+ CH_3C920,
+ CH_3C982A,
+ CH_3C982B,
+
+ CH_905BT4,
+ CH_920B_EMB_WNM,
+};
+
+
+/* note: this array is directly indexed by the enums above, and MUST
+ * be kept in sync with both those enums and the PCI device
+ * table below
+ */
+static struct vortex_chip_info {
+ const char *name;
+ int flags;
+ int drv_flags;
+ int io_size;
+} vortex_info_tbl[] __devinitdata = {
+ {"3c590 Vortex 10Mbps",
+ PCI_USES_MASTER, IS_VORTEX, 32, },
+ {"3c592 EISA 10Mbps Demon/Vortex", /* AKPM: from Don's 3c59x_cb.c 0.49H */
+ PCI_USES_MASTER, IS_VORTEX, 32, },
+ {"3c597 EISA Fast Demon/Vortex", /* AKPM: from Don's 3c59x_cb.c 0.49H */
+ PCI_USES_MASTER, IS_VORTEX, 32, },
+ {"3c595 Vortex 100baseTx",
+ PCI_USES_MASTER, IS_VORTEX, 32, },
+ {"3c595 Vortex 100baseT4",
+ PCI_USES_MASTER, IS_VORTEX, 32, },
+
+ {"3c595 Vortex 100base-MII",
+ PCI_USES_MASTER, IS_VORTEX, 32, },
+ {"3c900 Boomerang 10baseT",
+ PCI_USES_MASTER, IS_BOOMERANG|EEPROM_RESET, 64, },
+ {"3c900 Boomerang 10Mbps Combo",
+ PCI_USES_MASTER, IS_BOOMERANG|EEPROM_RESET, 64, },
+ {"3c900 Cyclone 10Mbps TPO", /* AKPM: from Don's 0.99M */
+ PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
+ {"3c900 Cyclone 10Mbps Combo",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
+
+ {"3c900 Cyclone 10Mbps TPC", /* AKPM: from Don's 0.99M */
+ PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
+ {"3c900B-FL Cyclone 10base-FL",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
+ {"3c905 Boomerang 100baseTx",
+ PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_RESET, 64, },
+ {"3c905 Boomerang 100baseT4",
+ PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_RESET, 64, },
+ {"3C905B-TX Fast Etherlink XL PCI",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
+ {"3c905B Cyclone 100baseTx",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
+
+ {"3c905B Cyclone 10/100/BNC",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
+ {"3c905B-FX Cyclone 100baseFx",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
+ {"3c905C Tornado",
+ PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
+ {"3c920B-EMB-WNM (ATI Radeon 9100 IGP)",
+ PCI_USES_MASTER, IS_TORNADO|HAS_MII|HAS_HWCKSM, 128, },
+ {"3c980 Cyclone",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
+
+ {"3c980C Python-T",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
+ {"3cSOHO100-TX Hurricane",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
+ {"3c555 Laptop Hurricane",
+ PCI_USES_MASTER, IS_CYCLONE|EEPROM_8BIT|HAS_HWCKSM, 128, },
+ {"3c556 Laptop Tornado",
+ PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_8BIT|HAS_CB_FNS|INVERT_MII_PWR|
+ HAS_HWCKSM, 128, },
+ {"3c556B Laptop Hurricane",
+ PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_OFFSET|HAS_CB_FNS|INVERT_MII_PWR|
+ WNO_XCVR_PWR|HAS_HWCKSM, 128, },
+
+ {"3c575 [Megahertz] 10/100 LAN CardBus",
+ PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
+ {"3c575 Boomerang CardBus",
+ PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
+ {"3CCFE575BT Cyclone CardBus",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|
+ INVERT_LED_PWR|HAS_HWCKSM, 128, },
+ {"3CCFE575CT Tornado CardBus",
+ PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
+ MAX_COLLISION_RESET|HAS_HWCKSM, 128, },
+ {"3CCFE656 Cyclone CardBus",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
+ INVERT_LED_PWR|HAS_HWCKSM, 128, },
+
+ {"3CCFEM656B Cyclone+Winmodem CardBus",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
+ INVERT_LED_PWR|HAS_HWCKSM, 128, },
+ {"3CXFEM656C Tornado+Winmodem CardBus", /* From pcmcia-cs-3.1.5 */
+ PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
+ MAX_COLLISION_RESET|HAS_HWCKSM, 128, },
+ {"3c450 HomePNA Tornado", /* AKPM: from Don's 0.99Q */
+ PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
+ {"3c920 Tornado",
+ PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
+ {"3c982 Hydra Dual Port A",
+ PCI_USES_MASTER, IS_TORNADO|HAS_HWCKSM|HAS_NWAY, 128, },
+
+ {"3c982 Hydra Dual Port B",
+ PCI_USES_MASTER, IS_TORNADO|HAS_HWCKSM|HAS_NWAY, 128, },
+ {"3c905B-T4",
+ PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
+ {"3c920B-EMB-WNM Tornado",
+ PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
+
+ {NULL,}, /* NULL terminated list. */
+};
+
+
+static DEFINE_PCI_DEVICE_TABLE(vortex_pci_tbl) = {
+ { 0x10B7, 0x5900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C590 },
+ { 0x10B7, 0x5920, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C592 },
+ { 0x10B7, 0x5970, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C597 },
+ { 0x10B7, 0x5950, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_1 },
+ { 0x10B7, 0x5951, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_2 },
+
+ { 0x10B7, 0x5952, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_3 },
+ { 0x10B7, 0x9000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_1 },
+ { 0x10B7, 0x9001, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_2 },
+ { 0x10B7, 0x9004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_3 },
+ { 0x10B7, 0x9005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_4 },
+
+ { 0x10B7, 0x9006, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_5 },
+ { 0x10B7, 0x900A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900B_FL },
+ { 0x10B7, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_1 },
+ { 0x10B7, 0x9051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_2 },
+ { 0x10B7, 0x9054, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_TX },
+ { 0x10B7, 0x9055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_1 },
+
+ { 0x10B7, 0x9058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_2 },
+ { 0x10B7, 0x905A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_FX },
+ { 0x10B7, 0x9200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905C },
+ { 0x10B7, 0x9202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C9202 },
+ { 0x10B7, 0x9800, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C980 },
+ { 0x10B7, 0x9805, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C9805 },
+
+ { 0x10B7, 0x7646, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CSOHO100_TX },
+ { 0x10B7, 0x5055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C555 },
+ { 0x10B7, 0x6055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556 },
+ { 0x10B7, 0x6056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556B },
+ { 0x10B7, 0x5b57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575 },
+
+ { 0x10B7, 0x5057, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575_1 },
+ { 0x10B7, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575 },
+ { 0x10B7, 0x5257, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575CT },
+ { 0x10B7, 0x6560, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE656 },
+ { 0x10B7, 0x6562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656 },
+
+ { 0x10B7, 0x6564, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656_1 },
+ { 0x10B7, 0x4500, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C450 },
+ { 0x10B7, 0x9201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C920 },
+ { 0x10B7, 0x1201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C982A },
+ { 0x10B7, 0x1202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C982B },
+
+ { 0x10B7, 0x9056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_905BT4 },
+ { 0x10B7, 0x9210, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_920B_EMB_WNM },
+
+ {0,} /* 0 terminated list. */
+};
+MODULE_DEVICE_TABLE(pci, vortex_pci_tbl);
+
+
+/* Operational definitions.
+ These are not used by other compilation units and thus are not
+ exported in a ".h" file.
+
+ First the windows. There are eight register windows, with the command
+ and status registers available in each.
+ */
+#define EL3_CMD 0x0e
+#define EL3_STATUS 0x0e
+
+/* The top five bits written to EL3_CMD are a command, the lower
+ 11 bits are the parameter, if applicable.
+   Note that 11 parameter bits were fine for ethernet, but the new chip
+   can handle FDDI-length frames (~4500 octets), so parameters now count
+   32-bit 'Dwords' rather than octets. */
+
+enum vortex_cmd {
+ TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
+ RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11,
+ UpStall = 6<<11, UpUnstall = (6<<11)+1,
+ DownStall = (6<<11)+2, DownUnstall = (6<<11)+3,
+ RxDiscard = 8<<11, TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
+ FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
+ SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
+ SetTxThreshold = 18<<11, SetTxStart = 19<<11,
+ StartDMAUp = 20<<11, StartDMADown = (20<<11)+1, StatsEnable = 21<<11,
+ StatsDisable = 22<<11, StopCoax = 23<<11, SetFilterBit = 25<<11,};
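+
+/* For example, a command from the enum above is packed together with its
+   dword-count parameter into a single 16-bit write, as done later in
+   vortex_up():
+
+	iowrite16(SetRxThreshold + (1536 >> 2), ioaddr + EL3_CMD);
+*/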
+
+/* The SetRxFilter command accepts the following classes: */
+enum RxFilter {
+ RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8 };
+
+/* Bits in the general status register. */
+enum vortex_status {
+ IntLatch = 0x0001, HostError = 0x0002, TxComplete = 0x0004,
+ TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
+ IntReq = 0x0040, StatsFull = 0x0080,
+ DMADone = 1<<8, DownComplete = 1<<9, UpComplete = 1<<10,
+ DMAInProgress = 1<<11, /* DMA controller is still busy.*/
+ CmdInProgress = 1<<12, /* EL3_CMD is still busy.*/
+};
+
+/* Register window 1 offsets, the window used in normal operation.
+ On the Vortex this window is always mapped at offsets 0x10-0x1f. */
+enum Window1 {
+ TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
+ RxStatus = 0x18, Timer=0x1A, TxStatus = 0x1B,
+ TxFree = 0x1C, /* Remaining free bytes in Tx buffer. */
+};
+enum Window0 {
+ Wn0EepromCmd = 10, /* Window 0: EEPROM command register. */
+ Wn0EepromData = 12, /* Window 0: EEPROM results register. */
+ IntrStatus=0x0E, /* Valid in all windows. */
+};
+enum Win0_EEPROM_bits {
+ EEPROM_Read = 0x80, EEPROM_WRITE = 0x40, EEPROM_ERASE = 0xC0,
+ EEPROM_EWENB = 0x30, /* Enable erasing/writing for 10 msec. */
+ EEPROM_EWDIS = 0x00, /* Disable EWENB before 10 msec timeout. */
+};
+/* EEPROM locations. */
+enum eeprom_offset {
+ PhysAddr01=0, PhysAddr23=1, PhysAddr45=2, ModelID=3,
+ EtherLink3ID=7, IFXcvrIO=8, IRQLine=9,
+ NodeAddr01=10, NodeAddr23=11, NodeAddr45=12,
+ DriverTune=13, Checksum=15};
+
+enum Window2 { /* Window 2. */
+ Wn2_ResetOptions=12,
+};
+enum Window3 { /* Window 3: MAC/config bits. */
+ Wn3_Config=0, Wn3_MaxPktSize=4, Wn3_MAC_Ctrl=6, Wn3_Options=8,
+};
+
+#define BFEXT(value, offset, bitcount) \
+ ((((unsigned long)(value)) >> (offset)) & ((1 << (bitcount)) - 1))
+
+#define BFINS(lhs, rhs, offset, bitcount) \
+ (((lhs) & ~((((1 << (bitcount)) - 1)) << (offset))) | \
+ (((rhs) & ((1 << (bitcount)) - 1)) << (offset)))
+
+#define RAM_SIZE(v) BFEXT(v, 0, 3)
+#define RAM_WIDTH(v) BFEXT(v, 3, 1)
+#define RAM_SPEED(v) BFEXT(v, 4, 2)
+#define ROM_SIZE(v) BFEXT(v, 6, 2)
+#define RAM_SPLIT(v) BFEXT(v, 16, 2)
+#define XCVR(v) BFEXT(v, 20, 4)
+#define AUTOSELECT(v) BFEXT(v, 24, 1)
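+
+/* These helpers decode fields of the Wn3_Config register; for example the
+   probe and open paths below use
+
+	vp->default_media = XCVR(config);
+	config = BFINS(config, dev->if_port, 20, 4);
+
+   to read the transceiver selection and to patch a new one back in. */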
+
+enum Window4 { /* Window 4: Xcvr/media bits. */
+ Wn4_FIFODiag = 4, Wn4_NetDiag = 6, Wn4_PhysicalMgmt=8, Wn4_Media = 10,
+};
+enum Win4_Media_bits {
+ Media_SQE = 0x0008, /* Enable SQE error counting for AUI. */
+ Media_10TP = 0x00C0, /* Enable link beat and jabber for 10baseT. */
+ Media_Lnk = 0x0080, /* Enable just link beat for 100TX/100FX. */
+ Media_LnkBeat = 0x0800,
+};
+enum Window7 { /* Window 7: Bus Master control. */
+ Wn7_MasterAddr = 0, Wn7_VlanEtherType=4, Wn7_MasterLen = 6,
+ Wn7_MasterStatus = 12,
+};
+/* Boomerang bus master control registers. */
+enum MasterCtrl {
+ PktStatus = 0x20, DownListPtr = 0x24, FragAddr = 0x28, FragLen = 0x2c,
+ TxFreeThreshold = 0x2f, UpPktStatus = 0x30, UpListPtr = 0x38,
+};
+
+/* The Rx and Tx descriptor lists.
+   Caution Alpha hackers: these types are 32 bits! Note also the 8-byte
+   alignment constraint on tx_ring[] and rx_ring[]. */
+#define LAST_FRAG 0x80000000 /* Last Addr/Len pair in descriptor. */
+#define DN_COMPLETE 0x00010000 /* This packet has been downloaded */
+struct boom_rx_desc {
+ __le32 next; /* Last entry points to 0. */
+ __le32 status;
+ __le32 addr; /* Up to 63 addr/len pairs possible. */
+ __le32 length; /* Set LAST_FRAG to indicate last pair. */
+};
+/* Values for the Rx status entry. */
+enum rx_desc_status {
+ RxDComplete=0x00008000, RxDError=0x4000,
+ /* See boomerang_rx() for actual error bits */
+ IPChksumErr=1<<25, TCPChksumErr=1<<26, UDPChksumErr=1<<27,
+ IPChksumValid=1<<29, TCPChksumValid=1<<30, UDPChksumValid=1<<31,
+};
+
+#ifdef MAX_SKB_FRAGS
+#define DO_ZEROCOPY 1
+#else
+#define DO_ZEROCOPY 0
+#endif
+
+struct boom_tx_desc {
+ __le32 next; /* Last entry points to 0. */
+ __le32 status; /* bits 0:12 length, others see below. */
+#if DO_ZEROCOPY
+ struct {
+ __le32 addr;
+ __le32 length;
+ } frag[1+MAX_SKB_FRAGS];
+#else
+ __le32 addr;
+ __le32 length;
+#endif
+};
+
+/* Values for the Tx status entry. */
+enum tx_desc_status {
+ CRCDisable=0x2000, TxDComplete=0x8000,
+ AddIPChksum=0x02000000, AddTCPChksum=0x04000000, AddUDPChksum=0x08000000,
+ TxIntrUploaded=0x80000000, /* IRQ when in FIFO, but maybe not sent. */
+};
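+
+/* A rough sketch of how the checksum-offload bits are used when a frame is
+   queued (the real code is in boomerang_start_xmit(); simplified here):
+
+	if (skb->ip_summed == CHECKSUM_PARTIAL)
+		vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded |
+							AddTCPChksum | AddUDPChksum);
+	else
+		vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
+*/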
+
+/* Chip features we care about in vp->capabilities, read from the EEPROM. */
+enum ChipCaps { CapBusMaster=0x20, CapPwrMgmt=0x2000 };
+
+struct vortex_extra_stats {
+ unsigned long tx_deferred;
+ unsigned long tx_max_collisions;
+ unsigned long tx_multiple_collisions;
+ unsigned long tx_single_collisions;
+ unsigned long rx_bad_ssd;
+};
+
+struct vortex_private {
+ /* The Rx and Tx rings should be quad-word-aligned. */
+ struct boom_rx_desc* rx_ring;
+ struct boom_tx_desc* tx_ring;
+ dma_addr_t rx_ring_dma;
+ dma_addr_t tx_ring_dma;
+ /* The addresses of transmit- and receive-in-place skbuffs. */
+ struct sk_buff* rx_skbuff[RX_RING_SIZE];
+ struct sk_buff* tx_skbuff[TX_RING_SIZE];
+ unsigned int cur_rx, cur_tx; /* The next free ring entry */
+ unsigned int dirty_rx, dirty_tx; /* The ring entries to be free()ed. */
+ struct vortex_extra_stats xstats; /* NIC-specific extra stats */
+ struct sk_buff *tx_skb; /* Packet being eaten by bus master ctrl. */
+ dma_addr_t tx_skb_dma; /* Allocated DMA address for bus master ctrl DMA. */
+
+ /* PCI configuration space information. */
+ struct device *gendev;
+ void __iomem *ioaddr; /* IO address space */
+ void __iomem *cb_fn_base; /* CardBus function status addr space. */
+
+ /* Some values here only for performance evaluation and path-coverage */
+ int rx_nocopy, rx_copy, queued_packet, rx_csumhits;
+ int card_idx;
+
+ /* The remainder are related to chip state, mostly media selection. */
+ struct timer_list timer; /* Media selection timer. */
+ struct timer_list rx_oom_timer; /* Rx skb allocation retry timer */
+ int options; /* User-settable misc. driver options. */
+ unsigned int media_override:4, /* Passed-in media type. */
+ default_media:4, /* Read from the EEPROM/Wn3_Config. */
+ full_duplex:1, autoselect:1,
+ bus_master:1, /* Vortex can only do a fragment bus-m. */
+ full_bus_master_tx:1, full_bus_master_rx:2, /* Boomerang */
+ flow_ctrl:1, /* Use 802.3x flow control (PAUSE only) */
+ partner_flow_ctrl:1, /* Partner supports flow control */
+ has_nway:1,
+ enable_wol:1, /* Wake-on-LAN is enabled */
+ pm_state_valid:1, /* pci_dev->saved_config_space has sane contents */
+ open:1,
+ medialock:1,
+ must_free_region:1, /* Flag: if zero, Cardbus owns the I/O region */
+ large_frames:1, /* accept large frames */
+ handling_irq:1; /* private in_irq indicator */
+ /* {get|set}_wol operations are already serialized by rtnl.
+ * no additional locking is required for the enable_wol and acpi_set_WOL()
+ */
+ int drv_flags;
+ u16 status_enable;
+ u16 intr_enable;
+ u16 available_media; /* From Wn3_Options. */
+ u16 capabilities, info1, info2; /* Various, from EEPROM. */
+ u16 advertising; /* NWay media advertisement */
+ unsigned char phys[2]; /* MII device addresses. */
+ u16 deferred; /* Resend these interrupts when we
+					 * bail from the ISR */
+ u16 io_size; /* Size of PCI region (for release_region) */
+
+ /* Serialises access to hardware other than MII and variables below.
+ * The lock hierarchy is rtnl_lock > {lock, mii_lock} > window_lock. */
+ spinlock_t lock;
+
+ spinlock_t mii_lock; /* Serialises access to MII */
+ struct mii_if_info mii; /* MII lib hooks/info */
+ spinlock_t window_lock; /* Serialises access to windowed regs */
+ int window; /* Register window */
+};
+
+static void window_set(struct vortex_private *vp, int window)
+{
+ if (window != vp->window) {
+ iowrite16(SelectWindow + window, vp->ioaddr + EL3_CMD);
+ vp->window = window;
+ }
+}
+
+#define DEFINE_WINDOW_IO(size) \
+static u ## size \
+window_read ## size(struct vortex_private *vp, int window, int addr) \
+{ \
+ unsigned long flags; \
+ u ## size ret; \
+ spin_lock_irqsave(&vp->window_lock, flags); \
+ window_set(vp, window); \
+ ret = ioread ## size(vp->ioaddr + addr); \
+ spin_unlock_irqrestore(&vp->window_lock, flags); \
+ return ret; \
+} \
+static void \
+window_write ## size(struct vortex_private *vp, u ## size value, \
+ int window, int addr) \
+{ \
+ unsigned long flags; \
+ spin_lock_irqsave(&vp->window_lock, flags); \
+ window_set(vp, window); \
+ iowrite ## size(value, vp->ioaddr + addr); \
+ spin_unlock_irqrestore(&vp->window_lock, flags); \
+}
+DEFINE_WINDOW_IO(8)
+DEFINE_WINDOW_IO(16)
+DEFINE_WINDOW_IO(32)
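+
+/* The generated accessors take the register window explicitly and serialise
+   the window select under window_lock, so callers never track the current
+   window themselves, e.g.:
+
+	config = window_read32(vp, 3, Wn3_Config);
+	window_write8(vp, dev->dev_addr[i], 2, i);
+*/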
+
+#ifdef CONFIG_PCI
+#define DEVICE_PCI(dev) (((dev)->bus == &pci_bus_type) ? to_pci_dev((dev)) : NULL)
+#else
+#define DEVICE_PCI(dev) NULL
+#endif
+
+#define VORTEX_PCI(vp) \
+ ((struct pci_dev *) (((vp)->gendev) ? DEVICE_PCI((vp)->gendev) : NULL))
+
+#ifdef CONFIG_EISA
+#define DEVICE_EISA(dev) (((dev)->bus == &eisa_bus_type) ? to_eisa_device((dev)) : NULL)
+#else
+#define DEVICE_EISA(dev) NULL
+#endif
+
+#define VORTEX_EISA(vp) \
+ ((struct eisa_device *) (((vp)->gendev) ? DEVICE_EISA((vp)->gendev) : NULL))
+
+/* The action to take with a media selection timer tick.
+ Note that we deviate from the 3Com order by checking 10base2 before AUI.
+ */
+enum xcvr_types {
+ XCVR_10baseT=0, XCVR_AUI, XCVR_10baseTOnly, XCVR_10base2, XCVR_100baseTx,
+ XCVR_100baseFx, XCVR_MII=6, XCVR_NWAY=8, XCVR_ExtMII=9, XCVR_Default=10,
+};
+
+static const struct media_table {
+ char *name;
+ unsigned int media_bits:16, /* Bits to set in Wn4_Media register. */
+ mask:8, /* The transceiver-present bit in Wn3_Config.*/
+ next:8; /* The media type to try next. */
+ int wait; /* Time before we check media status. */
+} media_tbl[] = {
+ { "10baseT", Media_10TP,0x08, XCVR_10base2, (14*HZ)/10},
+ { "10Mbs AUI", Media_SQE, 0x20, XCVR_Default, (1*HZ)/10},
+ { "undefined", 0, 0x80, XCVR_10baseT, 10000},
+ { "10base2", 0, 0x10, XCVR_AUI, (1*HZ)/10},
+ { "100baseTX", Media_Lnk, 0x02, XCVR_100baseFx, (14*HZ)/10},
+ { "100baseFX", Media_Lnk, 0x04, XCVR_MII, (14*HZ)/10},
+ { "MII", 0, 0x41, XCVR_10baseT, 3*HZ },
+ { "undefined", 0, 0x01, XCVR_10baseT, 10000},
+ { "Autonegotiate", 0, 0x41, XCVR_10baseT, 3*HZ},
+ { "MII-External", 0, 0x41, XCVR_10baseT, 3*HZ },
+ { "Default", 0, 0xFF, XCVR_10baseT, 10000},
+};
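+
+/* The 'next' field chains the table into a probe order: when autoselecting,
+   vortex_up() walks it until a transceiver present in Wn3_Options is found,
+   e.g.
+
+	while (!(vp->available_media & media_tbl[dev->if_port].mask))
+		dev->if_port = media_tbl[dev->if_port].next;
+*/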
+
+static struct {
+ const char str[ETH_GSTRING_LEN];
+} ethtool_stats_keys[] = {
+ { "tx_deferred" },
+ { "tx_max_collisions" },
+ { "tx_multiple_collisions" },
+ { "tx_single_collisions" },
+ { "rx_bad_ssd" },
+};
+
+/* number of ETHTOOL_GSTATS u64's */
+#define VORTEX_NUM_STATS 5
+
+static int vortex_probe1(struct device *gendev, void __iomem *ioaddr, int irq,
+ int chip_idx, int card_idx);
+static int vortex_up(struct net_device *dev);
+static void vortex_down(struct net_device *dev, int final);
+static int vortex_open(struct net_device *dev);
+static void mdio_sync(struct vortex_private *vp, int bits);
+static int mdio_read(struct net_device *dev, int phy_id, int location);
+static void mdio_write(struct net_device *vp, int phy_id, int location, int value);
+static void vortex_timer(unsigned long arg);
+static void rx_oom_timer(unsigned long arg);
+static netdev_tx_t vortex_start_xmit(struct sk_buff *skb,
+ struct net_device *dev);
+static netdev_tx_t boomerang_start_xmit(struct sk_buff *skb,
+ struct net_device *dev);
+static int vortex_rx(struct net_device *dev);
+static int boomerang_rx(struct net_device *dev);
+static irqreturn_t vortex_interrupt(int irq, void *dev_id);
+static irqreturn_t boomerang_interrupt(int irq, void *dev_id);
+static int vortex_close(struct net_device *dev);
+static void dump_tx_ring(struct net_device *dev);
+static void update_stats(void __iomem *ioaddr, struct net_device *dev);
+static struct net_device_stats *vortex_get_stats(struct net_device *dev);
+static void set_rx_mode(struct net_device *dev);
+#ifdef CONFIG_PCI
+static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
+#endif
+static void vortex_tx_timeout(struct net_device *dev);
+static void acpi_set_WOL(struct net_device *dev);
+static const struct ethtool_ops vortex_ethtool_ops;
+static void set_8021q_mode(struct net_device *dev, int enable);
+
+/* This driver uses 'options' to pass the media type, full-duplex flag, etc. */
+/* Option count limit only -- unlimited interfaces are supported. */
+#define MAX_UNITS 8
+static int options[MAX_UNITS] = { [0 ... MAX_UNITS-1] = -1 };
+static int full_duplex[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
+static int hw_checksums[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
+static int flow_ctrl[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
+static int enable_wol[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
+static int use_mmio[MAX_UNITS] = {[0 ... MAX_UNITS-1] = -1 };
+static int global_options = -1;
+static int global_full_duplex = -1;
+static int global_enable_wol = -1;
+static int global_use_mmio = -1;
+
+/* Variables to work around the Compaq PCI BIOS32 problem. */
+static int compaq_ioaddr, compaq_irq, compaq_device_id = 0x5900;
+static struct net_device *compaq_net_device;
+
+static int vortex_cards_found;
+
+module_param(debug, int, 0);
+module_param(global_options, int, 0);
+module_param_array(options, int, NULL, 0);
+module_param(global_full_duplex, int, 0);
+module_param_array(full_duplex, int, NULL, 0);
+module_param_array(hw_checksums, int, NULL, 0);
+module_param_array(flow_ctrl, int, NULL, 0);
+module_param(global_enable_wol, int, 0);
+module_param_array(enable_wol, int, NULL, 0);
+module_param(rx_copybreak, int, 0);
+module_param(max_interrupt_work, int, 0);
+module_param(compaq_ioaddr, int, 0);
+module_param(compaq_irq, int, 0);
+module_param(compaq_device_id, int, 0);
+module_param(watchdog, int, 0);
+module_param(global_use_mmio, int, 0);
+module_param_array(use_mmio, int, NULL, 0);
+MODULE_PARM_DESC(debug, "3c59x debug level (0-6)");
+MODULE_PARM_DESC(options, "3c59x: Bits 0-3: media type, bit 4: bus mastering, bit 9: full duplex");
+MODULE_PARM_DESC(global_options, "3c59x: same as options, but applies to all NICs if options is unset");
+MODULE_PARM_DESC(full_duplex, "3c59x full duplex setting(s) (1)");
+MODULE_PARM_DESC(global_full_duplex, "3c59x: same as full_duplex, but applies to all NICs if full_duplex is unset");
+MODULE_PARM_DESC(hw_checksums, "3c59x Hardware checksum checking by adapter(s) (0-1)");
+MODULE_PARM_DESC(flow_ctrl, "3c59x 802.3x flow control usage (PAUSE only) (0-1)");
+MODULE_PARM_DESC(enable_wol, "3c59x: Turn on Wake-on-LAN for adapter(s) (0-1)");
+MODULE_PARM_DESC(global_enable_wol, "3c59x: same as enable_wol, but applies to all NICs if enable_wol is unset");
+MODULE_PARM_DESC(rx_copybreak, "3c59x copy breakpoint for copy-only-tiny-frames");
+MODULE_PARM_DESC(max_interrupt_work, "3c59x maximum events handled per interrupt");
+MODULE_PARM_DESC(compaq_ioaddr, "3c59x PCI I/O base address (Compaq BIOS problem workaround)");
+MODULE_PARM_DESC(compaq_irq, "3c59x PCI IRQ number (Compaq BIOS problem workaround)");
+MODULE_PARM_DESC(compaq_device_id, "3c59x PCI device ID (Compaq BIOS problem workaround)");
+MODULE_PARM_DESC(watchdog, "3c59x transmit timeout in milliseconds");
+MODULE_PARM_DESC(global_use_mmio, "3c59x: same as use_mmio, but applies to all NICs if options is unset");
+MODULE_PARM_DESC(use_mmio, "3c59x: use memory-mapped PCI I/O resource (0-1)");
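+
+/* Typical modular usage (an illustrative example; per the bit layout above,
+   a media type of 4 in 'options' forces 100baseTx):
+
+	modprobe 3c59x options=4 full_duplex=1 debug=2
+*/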
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void poll_vortex(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ unsigned long flags;
+ local_irq_save(flags);
+ (vp->full_bus_master_rx ? boomerang_interrupt:vortex_interrupt)(dev->irq,dev);
+ local_irq_restore(flags);
+}
+#endif
+
+#ifdef CONFIG_PM
+
+static int vortex_suspend(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct net_device *ndev = pci_get_drvdata(pdev);
+
+ if (!ndev || !netif_running(ndev))
+ return 0;
+
+ netif_device_detach(ndev);
+ vortex_down(ndev, 1);
+
+ return 0;
+}
+
+static int vortex_resume(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct net_device *ndev = pci_get_drvdata(pdev);
+ int err;
+
+ if (!ndev || !netif_running(ndev))
+ return 0;
+
+ err = vortex_up(ndev);
+ if (err)
+ return err;
+
+ netif_device_attach(ndev);
+
+ return 0;
+}
+
+static const struct dev_pm_ops vortex_pm_ops = {
+ .suspend = vortex_suspend,
+ .resume = vortex_resume,
+ .freeze = vortex_suspend,
+ .thaw = vortex_resume,
+ .poweroff = vortex_suspend,
+ .restore = vortex_resume,
+};
+
+#define VORTEX_PM_OPS (&vortex_pm_ops)
+
+#else /* !CONFIG_PM */
+
+#define VORTEX_PM_OPS NULL
+
+#endif /* !CONFIG_PM */
+
+#ifdef CONFIG_EISA
+static struct eisa_device_id vortex_eisa_ids[] = {
+ { "TCM5920", CH_3C592 },
+ { "TCM5970", CH_3C597 },
+ { "" }
+};
+MODULE_DEVICE_TABLE(eisa, vortex_eisa_ids);
+
+static int __init vortex_eisa_probe(struct device *device)
+{
+ void __iomem *ioaddr;
+ struct eisa_device *edev;
+
+ edev = to_eisa_device(device);
+
+ if (!request_region(edev->base_addr, VORTEX_TOTAL_SIZE, DRV_NAME))
+ return -EBUSY;
+
+ ioaddr = ioport_map(edev->base_addr, VORTEX_TOTAL_SIZE);
+
+ if (vortex_probe1(device, ioaddr, ioread16(ioaddr + 0xC88) >> 12,
+ edev->id.driver_data, vortex_cards_found)) {
+ release_region(edev->base_addr, VORTEX_TOTAL_SIZE);
+ return -ENODEV;
+ }
+
+ vortex_cards_found++;
+
+ return 0;
+}
+
+static int __devexit vortex_eisa_remove(struct device *device)
+{
+ struct eisa_device *edev;
+ struct net_device *dev;
+ struct vortex_private *vp;
+ void __iomem *ioaddr;
+
+ edev = to_eisa_device(device);
+ dev = eisa_get_drvdata(edev);
+
+ if (!dev) {
+ pr_err("vortex_eisa_remove called for Compaq device!\n");
+ BUG();
+ }
+
+ vp = netdev_priv(dev);
+ ioaddr = vp->ioaddr;
+
+ unregister_netdev(dev);
+ iowrite16(TotalReset|0x14, ioaddr + EL3_CMD);
+ release_region(dev->base_addr, VORTEX_TOTAL_SIZE);
+
+ free_netdev(dev);
+ return 0;
+}
+
+static struct eisa_driver vortex_eisa_driver = {
+ .id_table = vortex_eisa_ids,
+ .driver = {
+ .name = "3c59x",
+ .probe = vortex_eisa_probe,
+ .remove = __devexit_p(vortex_eisa_remove)
+ }
+};
+
+#endif /* CONFIG_EISA */
+
+/* returns count found (>= 0), or negative on error */
+static int __init vortex_eisa_init(void)
+{
+ int eisa_found = 0;
+ int orig_cards_found = vortex_cards_found;
+
+#ifdef CONFIG_EISA
+ int err;
+
+ err = eisa_driver_register (&vortex_eisa_driver);
+ if (!err) {
+ /*
+		 * Because of the way the EISA bus is probed, we cannot assume
+		 * any device has been found when we exit from
+		 * eisa_driver_register (the bus root driver may not be
+		 * initialized yet). So we blindly assume something was
+		 * found, and let the sysfs magic happen...
+ */
+ eisa_found = 1;
+ }
+#endif
+
+	/* Special code to work around the Compaq PCI BIOS32 problem. */
+ if (compaq_ioaddr) {
+ vortex_probe1(NULL, ioport_map(compaq_ioaddr, VORTEX_TOTAL_SIZE),
+ compaq_irq, compaq_device_id, vortex_cards_found++);
+ }
+
+ return vortex_cards_found - orig_cards_found + eisa_found;
+}
+
+/* returns count (>= 0), or negative on error */
+static int __devinit vortex_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ int rc, unit, pci_bar;
+ struct vortex_chip_info *vci;
+ void __iomem *ioaddr;
+
+ /* wake up and enable device */
+ rc = pci_enable_device(pdev);
+ if (rc < 0)
+ goto out;
+
+ unit = vortex_cards_found;
+
+ if (global_use_mmio < 0 && (unit >= MAX_UNITS || use_mmio[unit] < 0)) {
+ /* Determine the default if the user didn't override us */
+ vci = &vortex_info_tbl[ent->driver_data];
+ pci_bar = vci->drv_flags & (IS_CYCLONE | IS_TORNADO) ? 1 : 0;
+ } else if (unit < MAX_UNITS && use_mmio[unit] >= 0)
+ pci_bar = use_mmio[unit] ? 1 : 0;
+ else
+ pci_bar = global_use_mmio ? 1 : 0;
+
+ ioaddr = pci_iomap(pdev, pci_bar, 0);
+	if (!ioaddr) /* If mapping fails, fall back to BAR 0... */
+ ioaddr = pci_iomap(pdev, 0, 0);
+ if (!ioaddr) {
+ pci_disable_device(pdev);
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ rc = vortex_probe1(&pdev->dev, ioaddr, pdev->irq,
+ ent->driver_data, unit);
+ if (rc < 0) {
+ pci_iounmap(pdev, ioaddr);
+ pci_disable_device(pdev);
+ goto out;
+ }
+
+ vortex_cards_found++;
+
+out:
+ return rc;
+}
+
+static const struct net_device_ops boomrang_netdev_ops = {
+ .ndo_open = vortex_open,
+ .ndo_stop = vortex_close,
+ .ndo_start_xmit = boomerang_start_xmit,
+ .ndo_tx_timeout = vortex_tx_timeout,
+ .ndo_get_stats = vortex_get_stats,
+#ifdef CONFIG_PCI
+ .ndo_do_ioctl = vortex_ioctl,
+#endif
+ .ndo_set_multicast_list = set_rx_mode,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = poll_vortex,
+#endif
+};
+
+static const struct net_device_ops vortex_netdev_ops = {
+ .ndo_open = vortex_open,
+ .ndo_stop = vortex_close,
+ .ndo_start_xmit = vortex_start_xmit,
+ .ndo_tx_timeout = vortex_tx_timeout,
+ .ndo_get_stats = vortex_get_stats,
+#ifdef CONFIG_PCI
+ .ndo_do_ioctl = vortex_ioctl,
+#endif
+ .ndo_set_multicast_list = set_rx_mode,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = poll_vortex,
+#endif
+};
+
+/*
+ * Start up the PCI/EISA device which is described by *gendev.
+ * Return 0 on success.
+ *
+ * NOTE: pdev can be NULL, for the case of a Compaq device
+ */
+static int __devinit vortex_probe1(struct device *gendev,
+ void __iomem *ioaddr, int irq,
+ int chip_idx, int card_idx)
+{
+ struct vortex_private *vp;
+ int option;
+ unsigned int eeprom[0x40], checksum = 0; /* EEPROM contents */
+ int i, step;
+ struct net_device *dev;
+ static int printed_version;
+ int retval, print_info;
+ struct vortex_chip_info * const vci = &vortex_info_tbl[chip_idx];
+ const char *print_name = "3c59x";
+ struct pci_dev *pdev = NULL;
+ struct eisa_device *edev = NULL;
+
+ if (!printed_version) {
+ pr_info("%s", version);
+ printed_version = 1;
+ }
+
+ if (gendev) {
+ if ((pdev = DEVICE_PCI(gendev))) {
+ print_name = pci_name(pdev);
+ }
+
+ if ((edev = DEVICE_EISA(gendev))) {
+ print_name = dev_name(&edev->dev);
+ }
+ }
+
+ dev = alloc_etherdev(sizeof(*vp));
+ retval = -ENOMEM;
+ if (!dev) {
+ pr_err(PFX "unable to allocate etherdev, aborting\n");
+ goto out;
+ }
+ SET_NETDEV_DEV(dev, gendev);
+ vp = netdev_priv(dev);
+
+ option = global_options;
+
+ /* The lower four bits are the media type. */
+ if (dev->mem_start) {
+ /*
+ * The 'options' param is passed in as the third arg to the
+ * LILO 'ether=' argument for non-modular use
+ */
+ option = dev->mem_start;
+ }
+ else if (card_idx < MAX_UNITS) {
+ if (options[card_idx] >= 0)
+ option = options[card_idx];
+ }
+
+ if (option > 0) {
+ if (option & 0x8000)
+ vortex_debug = 7;
+ if (option & 0x4000)
+ vortex_debug = 2;
+ if (option & 0x0400)
+ vp->enable_wol = 1;
+ }
+
+ print_info = (vortex_debug > 1);
+ if (print_info)
+ pr_info("See Documentation/networking/vortex.txt\n");
+
+ pr_info("%s: 3Com %s %s at %p.\n",
+ print_name,
+ pdev ? "PCI" : "EISA",
+ vci->name,
+ ioaddr);
+
+ dev->base_addr = (unsigned long)ioaddr;
+ dev->irq = irq;
+ dev->mtu = mtu;
+ vp->ioaddr = ioaddr;
+ vp->large_frames = mtu > 1500;
+ vp->drv_flags = vci->drv_flags;
+ vp->has_nway = (vci->drv_flags & HAS_NWAY) ? 1 : 0;
+ vp->io_size = vci->io_size;
+ vp->card_idx = card_idx;
+ vp->window = -1;
+
+ /* module list only for Compaq device */
+ if (gendev == NULL) {
+ compaq_net_device = dev;
+ }
+
+ /* PCI-only startup logic */
+ if (pdev) {
+ /* EISA resources already marked, so only PCI needs to do this here */
+ /* Ignore return value, because Cardbus drivers already allocate for us */
+ if (request_region(dev->base_addr, vci->io_size, print_name) != NULL)
+ vp->must_free_region = 1;
+
+ /* enable bus-mastering if necessary */
+ if (vci->flags & PCI_USES_MASTER)
+ pci_set_master(pdev);
+
+ if (vci->drv_flags & IS_VORTEX) {
+ u8 pci_latency;
+ u8 new_latency = 248;
+
+ /* Check the PCI latency value. On the 3c590 series the latency timer
+ must be set to the maximum value to avoid data corruption that occurs
+		   when the timer expires during a transfer. This bug exists in the
+		   Vortex chip only. */
+ pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &pci_latency);
+ if (pci_latency < new_latency) {
+ pr_info("%s: Overriding PCI latency timer (CFLT) setting of %d, new value is %d.\n",
+ print_name, pci_latency, new_latency);
+ pci_write_config_byte(pdev, PCI_LATENCY_TIMER, new_latency);
+ }
+ }
+ }
+
+ spin_lock_init(&vp->lock);
+ spin_lock_init(&vp->mii_lock);
+ spin_lock_init(&vp->window_lock);
+ vp->gendev = gendev;
+ vp->mii.dev = dev;
+ vp->mii.mdio_read = mdio_read;
+ vp->mii.mdio_write = mdio_write;
+ vp->mii.phy_id_mask = 0x1f;
+ vp->mii.reg_num_mask = 0x1f;
+
+	/* Make sure the rings are at least 16-byte aligned. */
+ vp->rx_ring = pci_alloc_consistent(pdev, sizeof(struct boom_rx_desc) * RX_RING_SIZE
+ + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
+ &vp->rx_ring_dma);
+ retval = -ENOMEM;
+ if (!vp->rx_ring)
+ goto free_region;
+
+ vp->tx_ring = (struct boom_tx_desc *)(vp->rx_ring + RX_RING_SIZE);
+ vp->tx_ring_dma = vp->rx_ring_dma + sizeof(struct boom_rx_desc) * RX_RING_SIZE;
+
+ /* if we are a PCI driver, we store info in pdev->driver_data
+ * instead of a module list */
+ if (pdev)
+ pci_set_drvdata(pdev, dev);
+ if (edev)
+ eisa_set_drvdata(edev, dev);
+
+ vp->media_override = 7;
+ if (option >= 0) {
+ vp->media_override = ((option & 7) == 2) ? 0 : option & 15;
+ if (vp->media_override != 7)
+ vp->medialock = 1;
+ vp->full_duplex = (option & 0x200) ? 1 : 0;
+ vp->bus_master = (option & 16) ? 1 : 0;
+ }
+
+ if (global_full_duplex > 0)
+ vp->full_duplex = 1;
+ if (global_enable_wol > 0)
+ vp->enable_wol = 1;
+
+ if (card_idx < MAX_UNITS) {
+ if (full_duplex[card_idx] > 0)
+ vp->full_duplex = 1;
+ if (flow_ctrl[card_idx] > 0)
+ vp->flow_ctrl = 1;
+ if (enable_wol[card_idx] > 0)
+ vp->enable_wol = 1;
+ }
+
+ vp->mii.force_media = vp->full_duplex;
+ vp->options = option;
+ /* Read the station address from the EEPROM. */
+ {
+ int base;
+
+ if (vci->drv_flags & EEPROM_8BIT)
+ base = 0x230;
+ else if (vci->drv_flags & EEPROM_OFFSET)
+ base = EEPROM_Read + 0x30;
+ else
+ base = EEPROM_Read;
+
+ for (i = 0; i < 0x40; i++) {
+ int timer;
+ window_write16(vp, base + i, 0, Wn0EepromCmd);
+			/* Pause for at least 162 us for the read to take place. */
+ for (timer = 10; timer >= 0; timer--) {
+ udelay(162);
+ if ((window_read16(vp, 0, Wn0EepromCmd) &
+ 0x8000) == 0)
+ break;
+ }
+ eeprom[i] = window_read16(vp, 0, Wn0EepromData);
+ }
+ }
+ for (i = 0; i < 0x18; i++)
+ checksum ^= eeprom[i];
+ checksum = (checksum ^ (checksum >> 8)) & 0xff;
+ if (checksum != 0x00) { /* Grrr, needless incompatible change 3Com. */
+ while (i < 0x21)
+ checksum ^= eeprom[i++];
+ checksum = (checksum ^ (checksum >> 8)) & 0xff;
+ }
+ if ((checksum != 0x00) && !(vci->drv_flags & IS_TORNADO))
+ pr_cont(" ***INVALID CHECKSUM %4.4x*** ", checksum);
+ for (i = 0; i < 3; i++)
+ ((__be16 *)dev->dev_addr)[i] = htons(eeprom[i + 10]);
+ memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
+ if (print_info)
+ pr_cont(" %pM", dev->dev_addr);
+ /* Unfortunately an all zero eeprom passes the checksum and this
+ gets found in the wild in failure cases. Crypto is hard 8) */
+ if (!is_valid_ether_addr(dev->dev_addr)) {
+ retval = -EINVAL;
+ pr_err("*** EEPROM MAC address is invalid.\n");
+ goto free_ring; /* With every pack */
+ }
+ for (i = 0; i < 6; i++)
+ window_write8(vp, dev->dev_addr[i], 2, i);
+
+ if (print_info)
+ pr_cont(", IRQ %d\n", dev->irq);
+ /* Tell them about an invalid IRQ. */
+ if (dev->irq <= 0 || dev->irq >= nr_irqs)
+ pr_warning(" *** Warning: IRQ %d is unlikely to work! ***\n",
+ dev->irq);
+
+ step = (window_read8(vp, 4, Wn4_NetDiag) & 0x1e) >> 1;
+ if (print_info) {
+ pr_info(" product code %02x%02x rev %02x.%d date %02d-%02d-%02d\n",
+ eeprom[6]&0xff, eeprom[6]>>8, eeprom[0x14],
+ step, (eeprom[4]>>5) & 15, eeprom[4] & 31, eeprom[4]>>9);
+ }
+
+
+ if (pdev && vci->drv_flags & HAS_CB_FNS) {
+ unsigned short n;
+
+ vp->cb_fn_base = pci_iomap(pdev, 2, 0);
+ if (!vp->cb_fn_base) {
+ retval = -ENOMEM;
+ goto free_ring;
+ }
+
+ if (print_info) {
+ pr_info("%s: CardBus functions mapped %16.16llx->%p\n",
+ print_name,
+ (unsigned long long)pci_resource_start(pdev, 2),
+ vp->cb_fn_base);
+ }
+
+ n = window_read16(vp, 2, Wn2_ResetOptions) & ~0x4010;
+ if (vp->drv_flags & INVERT_LED_PWR)
+ n |= 0x10;
+ if (vp->drv_flags & INVERT_MII_PWR)
+ n |= 0x4000;
+ window_write16(vp, n, 2, Wn2_ResetOptions);
+ if (vp->drv_flags & WNO_XCVR_PWR) {
+ window_write16(vp, 0x0800, 0, 0);
+ }
+ }
+
+ /* Extract our information from the EEPROM data. */
+ vp->info1 = eeprom[13];
+ vp->info2 = eeprom[15];
+ vp->capabilities = eeprom[16];
+
+ if (vp->info1 & 0x8000) {
+ vp->full_duplex = 1;
+ if (print_info)
+ pr_info("Full duplex capable\n");
+ }
+
+ {
+ static const char * const ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
+ unsigned int config;
+ vp->available_media = window_read16(vp, 3, Wn3_Options);
+ if ((vp->available_media & 0xff) == 0) /* Broken 3c916 */
+ vp->available_media = 0x40;
+ config = window_read32(vp, 3, Wn3_Config);
+ if (print_info) {
+ pr_debug(" Internal config register is %4.4x, transceivers %#x.\n",
+ config, window_read16(vp, 3, Wn3_Options));
+ pr_info(" %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
+ 8 << RAM_SIZE(config),
+ RAM_WIDTH(config) ? "word" : "byte",
+ ram_split[RAM_SPLIT(config)],
+ AUTOSELECT(config) ? "autoselect/" : "",
+ XCVR(config) > XCVR_ExtMII ? "<invalid transceiver>" :
+ media_tbl[XCVR(config)].name);
+ }
+ vp->default_media = XCVR(config);
+ if (vp->default_media == XCVR_NWAY)
+ vp->has_nway = 1;
+ vp->autoselect = AUTOSELECT(config);
+ }
+
+ if (vp->media_override != 7) {
+ pr_info("%s: Media override to transceiver type %d (%s).\n",
+ print_name, vp->media_override,
+ media_tbl[vp->media_override].name);
+ dev->if_port = vp->media_override;
+ } else
+ dev->if_port = vp->default_media;
+
+ if ((vp->available_media & 0x40) || (vci->drv_flags & HAS_NWAY) ||
+ dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
+ int phy, phy_idx = 0;
+ mii_preamble_required++;
+ if (vp->drv_flags & EXTRA_PREAMBLE)
+ mii_preamble_required++;
+ mdio_sync(vp, 32);
+ mdio_read(dev, 24, MII_BMSR);
+ for (phy = 0; phy < 32 && phy_idx < 1; phy++) {
+ int mii_status, phyx;
+
+ /*
+ * For the 3c905CX we look at index 24 first, because it bogusly
+ * reports an external PHY at all indices
+ */
+ if (phy == 0)
+ phyx = 24;
+ else if (phy <= 24)
+ phyx = phy - 1;
+ else
+ phyx = phy;
+ mii_status = mdio_read(dev, phyx, MII_BMSR);
+ if (mii_status && mii_status != 0xffff) {
+ vp->phys[phy_idx++] = phyx;
+ if (print_info) {
+ pr_info(" MII transceiver found at address %d, status %4x.\n",
+ phyx, mii_status);
+ }
+ if ((mii_status & 0x0040) == 0)
+ mii_preamble_required++;
+ }
+ }
+ mii_preamble_required--;
+ if (phy_idx == 0) {
+ pr_warning(" ***WARNING*** No MII transceivers found!\n");
+ vp->phys[0] = 24;
+ } else {
+ vp->advertising = mdio_read(dev, vp->phys[0], MII_ADVERTISE);
+ if (vp->full_duplex) {
+ /* Only advertise the FD media types. */
+ vp->advertising &= ~0x02A0;
+ mdio_write(dev, vp->phys[0], 4, vp->advertising);
+ }
+ }
+ vp->mii.phy_id = vp->phys[0];
+ }
+
+ if (vp->capabilities & CapBusMaster) {
+ vp->full_bus_master_tx = 1;
+ if (print_info) {
+ pr_info(" Enabling bus-master transmits and %s receives.\n",
+ (vp->info2 & 1) ? "early" : "whole-frame" );
+ }
+ vp->full_bus_master_rx = (vp->info2 & 1) ? 1 : 2;
+ vp->bus_master = 0; /* AKPM: vortex only */
+ }
+
+ /* The 3c59x-specific entries in the device structure. */
+ if (vp->full_bus_master_tx) {
+ dev->netdev_ops = &boomrang_netdev_ops;
+ /* Actually, it still should work with iommu. */
+ if (card_idx < MAX_UNITS &&
+ ((hw_checksums[card_idx] == -1 && (vp->drv_flags & HAS_HWCKSM)) ||
+ hw_checksums[card_idx] == 1)) {
+ dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
+ }
+ } else
+ dev->netdev_ops = &vortex_netdev_ops;
+
+ if (print_info) {
+ pr_info("%s: scatter/gather %sabled. h/w checksums %sabled\n",
+ print_name,
+ (dev->features & NETIF_F_SG) ? "en":"dis",
+ (dev->features & NETIF_F_IP_CSUM) ? "en":"dis");
+ }
+
+ dev->ethtool_ops = &vortex_ethtool_ops;
+ dev->watchdog_timeo = (watchdog * HZ) / 1000;
+
+ if (pdev) {
+ vp->pm_state_valid = 1;
+ pci_save_state(VORTEX_PCI(vp));
+ acpi_set_WOL(dev);
+ }
+ retval = register_netdev(dev);
+ if (retval == 0)
+ return 0;
+
+free_ring:
+ pci_free_consistent(pdev,
+ sizeof(struct boom_rx_desc) * RX_RING_SIZE
+ + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
+ vp->rx_ring,
+ vp->rx_ring_dma);
+free_region:
+ if (vp->must_free_region)
+ release_region(dev->base_addr, vci->io_size);
+ free_netdev(dev);
+ pr_err(PFX "vortex_probe1 fails. Returns %d\n", retval);
+out:
+ return retval;
+}
+
+static void
+issue_and_wait(struct net_device *dev, int cmd)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ int i;
+
+ iowrite16(cmd, ioaddr + EL3_CMD);
+ for (i = 0; i < 2000; i++) {
+ if (!(ioread16(ioaddr + EL3_STATUS) & CmdInProgress))
+ return;
+ }
+
+ /* OK, that didn't work. Do it the slow way. One second */
+ for (i = 0; i < 100000; i++) {
+ if (!(ioread16(ioaddr + EL3_STATUS) & CmdInProgress)) {
+ if (vortex_debug > 1)
+ pr_info("%s: command 0x%04x took %d usecs\n",
+ dev->name, cmd, i * 10);
+ return;
+ }
+ udelay(10);
+ }
+ pr_err("%s: command 0x%04x did not complete! Status=0x%x\n",
+ dev->name, cmd, ioread16(ioaddr + EL3_STATUS));
+}
+
+static void
+vortex_set_duplex(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+
+ pr_info("%s: setting %s-duplex.\n",
+ dev->name, (vp->full_duplex) ? "full" : "half");
+
+ /* Set the full-duplex bit. */
+ window_write16(vp,
+ ((vp->info1 & 0x8000) || vp->full_duplex ? 0x20 : 0) |
+ (vp->large_frames ? 0x40 : 0) |
+ ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ?
+ 0x100 : 0),
+ 3, Wn3_MAC_Ctrl);
+}
+
+static void vortex_check_media(struct net_device *dev, unsigned int init)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ unsigned int ok_to_print = 0;
+
+ if (vortex_debug > 3)
+ ok_to_print = 1;
+
+ if (mii_check_media(&vp->mii, ok_to_print, init)) {
+ vp->full_duplex = vp->mii.full_duplex;
+ vortex_set_duplex(dev);
+ } else if (init) {
+ vortex_set_duplex(dev);
+ }
+}
+
+static int
+vortex_up(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ unsigned int config;
+ int i, mii_reg1, mii_reg5, err = 0;
+
+ if (VORTEX_PCI(vp)) {
+ pci_set_power_state(VORTEX_PCI(vp), PCI_D0); /* Go active */
+ if (vp->pm_state_valid)
+ pci_restore_state(VORTEX_PCI(vp));
+ err = pci_enable_device(VORTEX_PCI(vp));
+ if (err) {
+ pr_warning("%s: Could not enable device\n",
+ dev->name);
+ goto err_out;
+ }
+ }
+
+ /* Before initializing select the active media port. */
+ config = window_read32(vp, 3, Wn3_Config);
+
+ if (vp->media_override != 7) {
+ pr_info("%s: Media override to transceiver %d (%s).\n",
+ dev->name, vp->media_override,
+ media_tbl[vp->media_override].name);
+ dev->if_port = vp->media_override;
+ } else if (vp->autoselect) {
+ if (vp->has_nway) {
+ if (vortex_debug > 1)
+ pr_info("%s: using NWAY device table, not %d\n",
+ dev->name, dev->if_port);
+ dev->if_port = XCVR_NWAY;
+ } else {
+ /* Find first available media type, starting with 100baseTx. */
+ dev->if_port = XCVR_100baseTx;
+ while (! (vp->available_media & media_tbl[dev->if_port].mask))
+ dev->if_port = media_tbl[dev->if_port].next;
+ if (vortex_debug > 1)
+ pr_info("%s: first available media type: %s\n",
+ dev->name, media_tbl[dev->if_port].name);
+ }
+ } else {
+ dev->if_port = vp->default_media;
+ if (vortex_debug > 1)
+ pr_info("%s: using default media %s\n",
+ dev->name, media_tbl[dev->if_port].name);
+ }
+
+ init_timer(&vp->timer);
+ vp->timer.expires = RUN_AT(media_tbl[dev->if_port].wait);
+ vp->timer.data = (unsigned long)dev;
+ vp->timer.function = vortex_timer; /* timer handler */
+ add_timer(&vp->timer);
+
+ init_timer(&vp->rx_oom_timer);
+ vp->rx_oom_timer.data = (unsigned long)dev;
+ vp->rx_oom_timer.function = rx_oom_timer;
+
+ if (vortex_debug > 1)
+ pr_debug("%s: Initial media type %s.\n",
+ dev->name, media_tbl[dev->if_port].name);
+
+ vp->full_duplex = vp->mii.force_media;
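+	/* The transceiver-select field of InternalConfig (Wn3_Config) appears
+	 * to occupy bits 20-23; BFINS() patches the chosen if_port into it. */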
+ config = BFINS(config, dev->if_port, 20, 4);
+ if (vortex_debug > 6)
+ pr_debug("vortex_up(): writing 0x%x to InternalConfig\n", config);
+ window_write32(vp, config, 3, Wn3_Config);
+
+ if (dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
+ mii_reg1 = mdio_read(dev, vp->phys[0], MII_BMSR);
+ mii_reg5 = mdio_read(dev, vp->phys[0], MII_LPA);
+ vp->partner_flow_ctrl = ((mii_reg5 & 0x0400) != 0);
+ vp->mii.full_duplex = vp->full_duplex;
+
+ vortex_check_media(dev, 1);
+ }
+ else
+ vortex_set_duplex(dev);
+
+ issue_and_wait(dev, TxReset);
+ /*
+ * Don't reset the PHY - that upsets autonegotiation during DHCP operations.
+ */
+ issue_and_wait(dev, RxReset|0x04);
+
+
+ iowrite16(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
+
+ if (vortex_debug > 1) {
+ pr_debug("%s: vortex_up() irq %d media status %4.4x.\n",
+ dev->name, dev->irq, window_read16(vp, 4, Wn4_Media));
+ }
+
+ /* Set the station address and mask in window 2 each time opened. */
+ for (i = 0; i < 6; i++)
+ window_write8(vp, dev->dev_addr[i], 2, i);
+ for (; i < 12; i+=2)
+ window_write16(vp, 0, 2, i);
+
+ if (vp->cb_fn_base) {
+ unsigned short n = window_read16(vp, 2, Wn2_ResetOptions) & ~0x4010;
+ if (vp->drv_flags & INVERT_LED_PWR)
+ n |= 0x10;
+ if (vp->drv_flags & INVERT_MII_PWR)
+ n |= 0x4000;
+ window_write16(vp, n, 2, Wn2_ResetOptions);
+ }
+
+ if (dev->if_port == XCVR_10base2)
+ /* Start the thinnet transceiver. We should really wait 50ms...*/
+ iowrite16(StartCoax, ioaddr + EL3_CMD);
+ if (dev->if_port != XCVR_NWAY) {
+ window_write16(vp,
+ (window_read16(vp, 4, Wn4_Media) &
+ ~(Media_10TP|Media_SQE)) |
+ media_tbl[dev->if_port].media_bits,
+ 4, Wn4_Media);
+ }
+
+ /* Switch to the stats window, and clear all stats by reading. */
+ iowrite16(StatsDisable, ioaddr + EL3_CMD);
+ for (i = 0; i < 10; i++)
+ window_read8(vp, 6, i);
+ window_read16(vp, 6, 10);
+ window_read16(vp, 6, 12);
+ /* New: On the Vortex we must also clear the BadSSD counter. */
+ window_read8(vp, 4, 12);
+ /* ..and on the Boomerang we enable the extra statistics bits. */
+ window_write16(vp, 0x0040, 4, Wn4_NetDiag);
+
+ if (vp->full_bus_master_rx) { /* Boomerang bus master. */
+ vp->cur_rx = vp->dirty_rx = 0;
+ /* Initialize the RxEarly register as recommended. */
+ iowrite16(SetRxThreshold + (1536>>2), ioaddr + EL3_CMD);
+ iowrite32(0x0020, ioaddr + PktStatus);
+ iowrite32(vp->rx_ring_dma, ioaddr + UpListPtr);
+ }
+ if (vp->full_bus_master_tx) { /* Boomerang bus master Tx. */
+ vp->cur_tx = vp->dirty_tx = 0;
+ if (vp->drv_flags & IS_BOOMERANG)
+ iowrite8(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold); /* Room for a packet. */
+ /* Clear the Rx, Tx rings. */
+ for (i = 0; i < RX_RING_SIZE; i++) /* AKPM: this is done in vortex_open, too */
+ vp->rx_ring[i].status = 0;
+ for (i = 0; i < TX_RING_SIZE; i++)
+ vp->tx_skbuff[i] = NULL;
+ iowrite32(0, ioaddr + DownListPtr);
+ }
+	/* Set receiver mode: presumably accept b-cast and phys addr only. */
+ set_rx_mode(dev);
+ /* enable 802.1q tagged frames */
+ set_8021q_mode(dev, 1);
+ iowrite16(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
+
+ iowrite16(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
+ iowrite16(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
+ /* Allow status bits to be seen. */
+ vp->status_enable = SetStatusEnb | HostError|IntReq|StatsFull|TxComplete|
+ (vp->full_bus_master_tx ? DownComplete : TxAvailable) |
+ (vp->full_bus_master_rx ? UpComplete : RxComplete) |
+ (vp->bus_master ? DMADone : 0);
+ vp->intr_enable = SetIntrEnb | IntLatch | TxAvailable |
+ (vp->full_bus_master_rx ? 0 : RxComplete) |
+ StatsFull | HostError | TxComplete | IntReq
+ | (vp->bus_master ? DMADone : 0) | UpComplete | DownComplete;
+ iowrite16(vp->status_enable, ioaddr + EL3_CMD);
+ /* Ack all pending events, and set active indicator mask. */
+ iowrite16(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
+ ioaddr + EL3_CMD);
+ iowrite16(vp->intr_enable, ioaddr + EL3_CMD);
+ if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
+ iowrite32(0x8000, vp->cb_fn_base + 4);
+ netif_start_queue (dev);
+err_out:
+ return err;
+}
+
+static int
+vortex_open(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ int i;
+ int retval;
+
+ /* Use the now-standard shared IRQ implementation. */
+ if ((retval = request_irq(dev->irq, vp->full_bus_master_rx ?
+ boomerang_interrupt : vortex_interrupt, IRQF_SHARED, dev->name, dev))) {
+ pr_err("%s: Could not reserve IRQ %d\n", dev->name, dev->irq);
+ goto err;
+ }
+
+ if (vp->full_bus_master_rx) { /* Boomerang bus master. */
+ if (vortex_debug > 2)
+ pr_debug("%s: Filling in the Rx ring.\n", dev->name);
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ struct sk_buff *skb;
+ vp->rx_ring[i].next = cpu_to_le32(vp->rx_ring_dma + sizeof(struct boom_rx_desc) * (i+1));
+ vp->rx_ring[i].status = 0; /* Clear complete bit. */
+ vp->rx_ring[i].length = cpu_to_le32(PKT_BUF_SZ | LAST_FRAG);
+
+ skb = __netdev_alloc_skb(dev, PKT_BUF_SZ + NET_IP_ALIGN,
+ GFP_KERNEL);
+ vp->rx_skbuff[i] = skb;
+ if (skb == NULL)
+ break; /* Bad news! */
+
+ skb_reserve(skb, NET_IP_ALIGN); /* Align IP on 16 byte boundaries */
+ vp->rx_ring[i].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
+ }
+ if (i != RX_RING_SIZE) {
+ int j;
+ pr_emerg("%s: no memory for rx ring\n", dev->name);
+ for (j = 0; j < i; j++) {
+ if (vp->rx_skbuff[j]) {
+ dev_kfree_skb(vp->rx_skbuff[j]);
+ vp->rx_skbuff[j] = NULL;
+ }
+ }
+ retval = -ENOMEM;
+ goto err_free_irq;
+ }
+ /* Wrap the ring. */
+ vp->rx_ring[i-1].next = cpu_to_le32(vp->rx_ring_dma);
+ }
+
+ retval = vortex_up(dev);
+ if (!retval)
+ goto out;
+
+err_free_irq:
+ free_irq(dev->irq, dev);
+err:
+ if (vortex_debug > 1)
+ pr_err("%s: vortex_open() fails: returning %d\n", dev->name, retval);
+out:
+ return retval;
+}
+
+static void
+vortex_timer(unsigned long data)
+{
+ struct net_device *dev = (struct net_device *)data;
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ int next_tick = 60*HZ;
+ int ok = 0;
+ int media_status;
+
+ if (vortex_debug > 2) {
+ pr_debug("%s: Media selection timer tick happened, %s.\n",
+ dev->name, media_tbl[dev->if_port].name);
+ pr_debug("dev->watchdog_timeo=%d\n", dev->watchdog_timeo);
+ }
+
+ media_status = window_read16(vp, 4, Wn4_Media);
+ switch (dev->if_port) {
+ case XCVR_10baseT: case XCVR_100baseTx: case XCVR_100baseFx:
+ if (media_status & Media_LnkBeat) {
+ netif_carrier_on(dev);
+ ok = 1;
+ if (vortex_debug > 1)
+ pr_debug("%s: Media %s has link beat, %x.\n",
+ dev->name, media_tbl[dev->if_port].name, media_status);
+ } else {
+ netif_carrier_off(dev);
+ if (vortex_debug > 1) {
+ pr_debug("%s: Media %s has no link beat, %x.\n",
+ dev->name, media_tbl[dev->if_port].name, media_status);
+ }
+ }
+ break;
+ case XCVR_MII: case XCVR_NWAY:
+ {
+ ok = 1;
+ vortex_check_media(dev, 0);
+ }
+ break;
+ default: /* Other media types handled by Tx timeouts. */
+ if (vortex_debug > 1)
+ pr_debug("%s: Media %s has no indication, %x.\n",
+ dev->name, media_tbl[dev->if_port].name, media_status);
+ ok = 1;
+ }
+
+ if (!netif_carrier_ok(dev))
+ next_tick = 5*HZ;
+
+ if (vp->medialock)
+ goto leave_media_alone;
+
+ if (!ok) {
+ unsigned int config;
+
+ spin_lock_irq(&vp->lock);
+
+ do {
+ dev->if_port = media_tbl[dev->if_port].next;
+ } while ( ! (vp->available_media & media_tbl[dev->if_port].mask));
+ if (dev->if_port == XCVR_Default) { /* Go back to default. */
+ dev->if_port = vp->default_media;
+ if (vortex_debug > 1)
+ pr_debug("%s: Media selection failing, using default %s port.\n",
+ dev->name, media_tbl[dev->if_port].name);
+ } else {
+ if (vortex_debug > 1)
+ pr_debug("%s: Media selection failed, now trying %s port.\n",
+ dev->name, media_tbl[dev->if_port].name);
+ next_tick = media_tbl[dev->if_port].wait;
+ }
+ window_write16(vp,
+ (media_status & ~(Media_10TP|Media_SQE)) |
+ media_tbl[dev->if_port].media_bits,
+ 4, Wn4_Media);
+
+ config = window_read32(vp, 3, Wn3_Config);
+ config = BFINS(config, dev->if_port, 20, 4);
+ window_write32(vp, config, 3, Wn3_Config);
+
+ iowrite16(dev->if_port == XCVR_10base2 ? StartCoax : StopCoax,
+ ioaddr + EL3_CMD);
+ if (vortex_debug > 1)
+ pr_debug("wrote 0x%08x to Wn3_Config\n", config);
+ /* AKPM: FIXME: Should reset Rx & Tx here. P60 of 3c90xc.pdf */
+
+ spin_unlock_irq(&vp->lock);
+ }
+
+leave_media_alone:
+ if (vortex_debug > 2)
+ pr_debug("%s: Media selection timer finished, %s.\n",
+ dev->name, media_tbl[dev->if_port].name);
+
+ mod_timer(&vp->timer, RUN_AT(next_tick));
+ if (vp->deferred)
+ iowrite16(FakeIntr, ioaddr + EL3_CMD);
+}
+
+static void vortex_tx_timeout(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+
+ pr_err("%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
+ dev->name, ioread8(ioaddr + TxStatus),
+ ioread16(ioaddr + EL3_STATUS));
+ pr_err(" diagnostics: net %04x media %04x dma %08x fifo %04x\n",
+ window_read16(vp, 4, Wn4_NetDiag),
+ window_read16(vp, 4, Wn4_Media),
+ ioread32(ioaddr + PktStatus),
+ window_read16(vp, 4, Wn4_FIFODiag));
+ /* Slight code bloat to be user friendly. */
+ if ((ioread8(ioaddr + TxStatus) & 0x88) == 0x88)
+ pr_err("%s: Transmitter encountered 16 collisions --"
+ " network cable problem?\n", dev->name);
+ if (ioread16(ioaddr + EL3_STATUS) & IntLatch) {
+ pr_err("%s: Interrupt posted but not delivered --"
+ " IRQ blocked by another device?\n", dev->name);
+ /* Bad idea here.. but we might as well handle a few events. */
+ {
+ /*
+ * Block interrupts because vortex_interrupt does a bare spin_lock()
+ */
+ unsigned long flags;
+ local_irq_save(flags);
+ if (vp->full_bus_master_tx)
+ boomerang_interrupt(dev->irq, dev);
+ else
+ vortex_interrupt(dev->irq, dev);
+ local_irq_restore(flags);
+ }
+ }
+
+ if (vortex_debug > 0)
+ dump_tx_ring(dev);
+
+ issue_and_wait(dev, TxReset);
+
+ dev->stats.tx_errors++;
+ if (vp->full_bus_master_tx) {
+ pr_debug("%s: Resetting the Tx ring pointer.\n", dev->name);
+ if (vp->cur_tx - vp->dirty_tx > 0 && ioread32(ioaddr + DownListPtr) == 0)
+ iowrite32(vp->tx_ring_dma + (vp->dirty_tx % TX_RING_SIZE) * sizeof(struct boom_tx_desc),
+ ioaddr + DownListPtr);
+ if (vp->cur_tx - vp->dirty_tx < TX_RING_SIZE)
+ netif_wake_queue (dev);
+ if (vp->drv_flags & IS_BOOMERANG)
+ iowrite8(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold);
+ iowrite16(DownUnstall, ioaddr + EL3_CMD);
+ } else {
+ dev->stats.tx_dropped++;
+ netif_wake_queue(dev);
+ }
+
+ /* Issue Tx Enable */
+ iowrite16(TxEnable, ioaddr + EL3_CMD);
+ dev->trans_start = jiffies; /* prevent tx timeout */
+}
+
+/*
+ * Handle uncommon interrupt sources. This is a separate routine to minimize
+ * the cache impact.
+ */
+static void
+vortex_error(struct net_device *dev, int status)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ int do_tx_reset = 0, reset_mask = 0;
+ unsigned char tx_status = 0;
+
+ if (vortex_debug > 2) {
+ pr_err("%s: vortex_error(), status=0x%x\n", dev->name, status);
+ }
+
+ if (status & TxComplete) { /* Really "TxError" for us. */
+ tx_status = ioread8(ioaddr + TxStatus);
+ /* Presumably a tx-timeout. We must merely re-enable. */
+ if (vortex_debug > 2 ||
+ (tx_status != 0x88 && vortex_debug > 0)) {
+ pr_err("%s: Transmit error, Tx status register %2.2x.\n",
+ dev->name, tx_status);
+ if (tx_status == 0x82) {
+ pr_err("Probably a duplex mismatch. See "
+ "Documentation/networking/vortex.txt\n");
+ }
+ dump_tx_ring(dev);
+ }
+ if (tx_status & 0x14) dev->stats.tx_fifo_errors++;
+ if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
+ if (tx_status & 0x08) vp->xstats.tx_max_collisions++;
+ iowrite8(0, ioaddr + TxStatus);
+ if (tx_status & 0x30) { /* txJabber or txUnderrun */
+ do_tx_reset = 1;
+ } else if ((tx_status & 0x08) && (vp->drv_flags & MAX_COLLISION_RESET)) { /* maxCollisions */
+ do_tx_reset = 1;
+ reset_mask = 0x0108; /* Reset interface logic, but not download logic */
+ } else { /* Merely re-enable the transmitter. */
+ iowrite16(TxEnable, ioaddr + EL3_CMD);
+ }
+ }
+
+ if (status & RxEarly) /* Rx early is unused. */
+ iowrite16(AckIntr | RxEarly, ioaddr + EL3_CMD);
+
+ if (status & StatsFull) { /* Empty statistics. */
+ static int DoneDidThat;
+ if (vortex_debug > 4)
+ pr_debug("%s: Updating stats.\n", dev->name);
+ update_stats(ioaddr, dev);
+ /* HACK: Disable statistics as an interrupt source. */
+ /* This occurs when we have the wrong media type! */
+ if (DoneDidThat == 0 &&
+ ioread16(ioaddr + EL3_STATUS) & StatsFull) {
+ pr_warning("%s: Updating statistics failed, disabling "
+ "stats as an interrupt source.\n", dev->name);
+ iowrite16(SetIntrEnb |
+ (window_read16(vp, 5, 10) & ~StatsFull),
+ ioaddr + EL3_CMD);
+ vp->intr_enable &= ~StatsFull;
+ DoneDidThat++;
+ }
+ }
+ if (status & IntReq) { /* Restore all interrupt sources. */
+ iowrite16(vp->status_enable, ioaddr + EL3_CMD);
+ iowrite16(vp->intr_enable, ioaddr + EL3_CMD);
+ }
+ if (status & HostError) {
+ u16 fifo_diag;
+ fifo_diag = window_read16(vp, 4, Wn4_FIFODiag);
+ pr_err("%s: Host error, FIFO diagnostic register %4.4x.\n",
+ dev->name, fifo_diag);
+ /* Adapter failure requires Tx/Rx reset and reinit. */
+ if (vp->full_bus_master_tx) {
+ int bus_status = ioread32(ioaddr + PktStatus);
+ /* 0x80000000 PCI master abort. */
+ /* 0x40000000 PCI target abort. */
+ if (vortex_debug)
+ pr_err("%s: PCI bus error, bus status %8.8x\n", dev->name, bus_status);
+
+ /* In this case, blow the card away */
+ /* Must not enter D3 or we can't legally issue the reset! */
+ vortex_down(dev, 0);
+ issue_and_wait(dev, TotalReset | 0xff);
+ vortex_up(dev); /* AKPM: bug. vortex_up() assumes that the rx ring is full. It may not be. */
+ } else if (fifo_diag & 0x0400)
+ do_tx_reset = 1;
+ if (fifo_diag & 0x3000) {
+ /* Reset Rx fifo and upload logic */
+ issue_and_wait(dev, RxReset|0x07);
+ /* Set the Rx filter to the current state. */
+ set_rx_mode(dev);
+ /* enable 802.1q VLAN tagged frames */
+ set_8021q_mode(dev, 1);
+ iowrite16(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
+ iowrite16(AckIntr | HostError, ioaddr + EL3_CMD);
+ }
+ }
+
+ if (do_tx_reset) {
+ issue_and_wait(dev, TxReset|reset_mask);
+ iowrite16(TxEnable, ioaddr + EL3_CMD);
+ if (!vp->full_bus_master_tx)
+ netif_wake_queue(dev);
+ }
+}
+
+static netdev_tx_t
+vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+
+ /* Put out the doubleword header... */
+ iowrite32(skb->len, ioaddr + TX_FIFO);
+ if (vp->bus_master) {
+ /* Set the bus-master controller to transfer the packet. */
+ int len = (skb->len + 3) & ~3;
+ vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len,
+ PCI_DMA_TODEVICE);
+ spin_lock_irq(&vp->window_lock);
+ window_set(vp, 7);
+ iowrite32(vp->tx_skb_dma, ioaddr + Wn7_MasterAddr);
+ iowrite16(len, ioaddr + Wn7_MasterLen);
+ spin_unlock_irq(&vp->window_lock);
+ vp->tx_skb = skb;
+ iowrite16(StartDMADown, ioaddr + EL3_CMD);
+ /* netif_wake_queue() will be called at the DMADone interrupt. */
+ } else {
+ /* ... and the packet rounded to a doubleword. */
+ iowrite32_rep(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
+ dev_kfree_skb (skb);
+ if (ioread16(ioaddr + TxFree) > 1536) {
+ netif_start_queue (dev); /* AKPM: redundant? */
+ } else {
+ /* Interrupt us when the FIFO has room for max-sized packet. */
+ netif_stop_queue(dev);
+ iowrite16(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
+ }
+ }
+
+
+ /* Clear the Tx status stack. */
+ {
+ int tx_status;
+ int i = 32;
+
+ while (--i > 0 && (tx_status = ioread8(ioaddr + TxStatus)) > 0) {
+ if (tx_status & 0x3C) { /* A Tx-disabling error occurred. */
+ if (vortex_debug > 2)
+ pr_debug("%s: Tx error, status %2.2x.\n",
+ dev->name, tx_status);
+ if (tx_status & 0x04) dev->stats.tx_fifo_errors++;
+ if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
+ if (tx_status & 0x30) {
+ issue_and_wait(dev, TxReset);
+ }
+ iowrite16(TxEnable, ioaddr + EL3_CMD);
+ }
+ iowrite8(0x00, ioaddr + TxStatus); /* Pop the status stack. */
+ }
+ }
+ return NETDEV_TX_OK;
+}
+
+static netdev_tx_t
+boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ /* Calculate the next Tx descriptor entry. */
+ int entry = vp->cur_tx % TX_RING_SIZE;
+ struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE];
+ unsigned long flags;
+
+ if (vortex_debug > 6) {
+ pr_debug("boomerang_start_xmit()\n");
+ pr_debug("%s: Trying to send a packet, Tx index %d.\n",
+ dev->name, vp->cur_tx);
+ }
+
+ /*
+ * We can't allow a recursion from our interrupt handler back into the
+ * tx routine, as they take the same spin lock, and that causes
+ * deadlock. Just return NETDEV_TX_BUSY and let the stack try again in
+ * a bit
+ */
+ if (vp->handling_irq)
+ return NETDEV_TX_BUSY;
+
+ if (vp->cur_tx - vp->dirty_tx >= TX_RING_SIZE) {
+ if (vortex_debug > 0)
+ pr_warning("%s: BUG! Tx Ring full, refusing to send buffer.\n",
+ dev->name);
+ netif_stop_queue(dev);
+ return NETDEV_TX_BUSY;
+ }
+
+ vp->tx_skbuff[entry] = skb;
+
+ vp->tx_ring[entry].next = 0;
+#if DO_ZEROCOPY
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
+ else
+ vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum);
+
+ if (!skb_shinfo(skb)->nr_frags) {
+ vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
+ skb->len, PCI_DMA_TODEVICE));
+ vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len | LAST_FRAG);
+ } else {
+ int i;
+
+ vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
+ skb_headlen(skb), PCI_DMA_TODEVICE));
+ vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb_headlen(skb));
+
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+ vp->tx_ring[entry].frag[i+1].addr =
+ cpu_to_le32(pci_map_single(VORTEX_PCI(vp),
+ (void*)page_address(frag->page) + frag->page_offset,
+ frag->size, PCI_DMA_TODEVICE));
+
+ if (i == skb_shinfo(skb)->nr_frags-1)
+ vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size|LAST_FRAG);
+ else
+ vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size);
+ }
+ }
+#else
+ vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE));
+ vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG);
+ vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
+#endif
+
+ spin_lock_irqsave(&vp->lock, flags);
+ /* Wait for the stall to complete. */
+ issue_and_wait(dev, DownStall);
+ prev_entry->next = cpu_to_le32(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc));
+ if (ioread32(ioaddr + DownListPtr) == 0) {
+ iowrite32(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc), ioaddr + DownListPtr);
+ vp->queued_packet++;
+ }
+
+ vp->cur_tx++;
+ if (vp->cur_tx - vp->dirty_tx > TX_RING_SIZE - 1) {
+ netif_stop_queue (dev);
+ } else { /* Clear previous interrupt enable. */
+#if defined(tx_interrupt_mitigation)
+		/* Dubious. If in boomerang_interrupt "faster" cyclone ifdef
+ * were selected, this would corrupt DN_COMPLETE. No?
+ */
+ prev_entry->status &= cpu_to_le32(~TxIntrUploaded);
+#endif
+ }
+ iowrite16(DownUnstall, ioaddr + EL3_CMD);
+ spin_unlock_irqrestore(&vp->lock, flags);
+ return NETDEV_TX_OK;
+}
+
+/* The interrupt handler does all of the Rx thread work and cleans up
+ after the Tx thread. */
+
+/*
+ * This is the ISR for the vortex series chips.
+ * full_bus_master_tx == 0 && full_bus_master_rx == 0
+ */
+
+static irqreturn_t
+vortex_interrupt(int irq, void *dev_id)
+{
+ struct net_device *dev = dev_id;
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr;
+ int status;
+ int work_done = max_interrupt_work;
+ int handled = 0;
+
+ ioaddr = vp->ioaddr;
+ spin_lock(&vp->lock);
+
+ status = ioread16(ioaddr + EL3_STATUS);
+
+ if (vortex_debug > 6)
+ pr_debug("vortex_interrupt(). status=0x%4x\n", status);
+
+ if ((status & IntLatch) == 0)
+ goto handler_exit; /* No interrupt: shared IRQs cause this */
+ handled = 1;
+
+ if (status & IntReq) {
+ status |= vp->deferred;
+ vp->deferred = 0;
+ }
+
+ if (status == 0xffff) /* h/w no longer present (hotplug)? */
+ goto handler_exit;
+
+ if (vortex_debug > 4)
+ pr_debug("%s: interrupt, status %4.4x, latency %d ticks.\n",
+ dev->name, status, ioread8(ioaddr + Timer));
+
+ spin_lock(&vp->window_lock);
+ window_set(vp, 7);
+
+ do {
+ if (vortex_debug > 5)
+ pr_debug("%s: In interrupt loop, status %4.4x.\n",
+ dev->name, status);
+ if (status & RxComplete)
+ vortex_rx(dev);
+
+ if (status & TxAvailable) {
+ if (vortex_debug > 5)
+ pr_debug(" TX room bit was handled.\n");
+ /* There's room in the FIFO for a full-sized packet. */
+ iowrite16(AckIntr | TxAvailable, ioaddr + EL3_CMD);
+ netif_wake_queue (dev);
+ }
+
+ if (status & DMADone) {
+ if (ioread16(ioaddr + Wn7_MasterStatus) & 0x1000) {
+ iowrite16(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
+ pci_unmap_single(VORTEX_PCI(vp), vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3, PCI_DMA_TODEVICE);
+ dev_kfree_skb_irq(vp->tx_skb); /* Release the transferred buffer */
+ if (ioread16(ioaddr + TxFree) > 1536) {
+ /*
+ * AKPM: FIXME: I don't think we need this. If the queue was stopped due to
+ * insufficient FIFO room, the TxAvailable test will succeed and call
+ * netif_wake_queue()
+ */
+ netif_wake_queue(dev);
+ } else { /* Interrupt when FIFO has room for max-sized packet. */
+ iowrite16(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
+ netif_stop_queue(dev);
+ }
+ }
+ }
+ /* Check for all uncommon interrupts at once. */
+ if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq)) {
+ if (status == 0xffff)
+ break;
+ if (status & RxEarly)
+ vortex_rx(dev);
+ spin_unlock(&vp->window_lock);
+ vortex_error(dev, status);
+ spin_lock(&vp->window_lock);
+ window_set(vp, 7);
+ }
+
+ if (--work_done < 0) {
+ pr_warning("%s: Too much work in interrupt, status %4.4x.\n",
+ dev->name, status);
+ /* Disable all pending interrupts. */
+ do {
+ vp->deferred |= status;
+ iowrite16(SetStatusEnb | (~vp->deferred & vp->status_enable),
+ ioaddr + EL3_CMD);
+ iowrite16(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
+ } while ((status = ioread16(ioaddr + EL3_CMD)) & IntLatch);
+ /* The timer will reenable interrupts. */
+ mod_timer(&vp->timer, jiffies + 1*HZ);
+ break;
+ }
+ /* Acknowledge the IRQ. */
+ iowrite16(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
+ } while ((status = ioread16(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));
+
+ spin_unlock(&vp->window_lock);
+
+ if (vortex_debug > 4)
+ pr_debug("%s: exiting interrupt, status %4.4x.\n",
+ dev->name, status);
+handler_exit:
+ spin_unlock(&vp->lock);
+ return IRQ_RETVAL(handled);
+}
+
+/*
+ * This is the ISR for the boomerang series chips.
+ * full_bus_master_tx == 1 && full_bus_master_rx == 1
+ */
+
+static irqreturn_t
+boomerang_interrupt(int irq, void *dev_id)
+{
+ struct net_device *dev = dev_id;
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr;
+ int status;
+ int work_done = max_interrupt_work;
+
+ ioaddr = vp->ioaddr;
+
+
+ /*
+ * It seems dopey to put the spinlock this early, but we could race against vortex_tx_timeout
+ * and boomerang_start_xmit
+ */
+ spin_lock(&vp->lock);
+ vp->handling_irq = 1;
+
+ status = ioread16(ioaddr + EL3_STATUS);
+
+ if (vortex_debug > 6)
+ pr_debug("boomerang_interrupt. status=0x%4x\n", status);
+
+ if ((status & IntLatch) == 0)
+ goto handler_exit; /* No interrupt: shared IRQs can cause this */
+
+ if (status == 0xffff) { /* h/w no longer present (hotplug)? */
+ if (vortex_debug > 1)
+ pr_debug("boomerang_interrupt(1): status = 0xffff\n");
+ goto handler_exit;
+ }
+
+ if (status & IntReq) {
+ status |= vp->deferred;
+ vp->deferred = 0;
+ }
+
+ if (vortex_debug > 4)
+ pr_debug("%s: interrupt, status %4.4x, latency %d ticks.\n",
+ dev->name, status, ioread8(ioaddr + Timer));
+ do {
+ if (vortex_debug > 5)
+ pr_debug("%s: In interrupt loop, status %4.4x.\n",
+ dev->name, status);
+ if (status & UpComplete) {
+ iowrite16(AckIntr | UpComplete, ioaddr + EL3_CMD);
+ if (vortex_debug > 5)
+ pr_debug("boomerang_interrupt->boomerang_rx\n");
+ boomerang_rx(dev);
+ }
+
+ if (status & DownComplete) {
+ unsigned int dirty_tx = vp->dirty_tx;
+
+ iowrite16(AckIntr | DownComplete, ioaddr + EL3_CMD);
+ while (vp->cur_tx - dirty_tx > 0) {
+ int entry = dirty_tx % TX_RING_SIZE;
+#if 1 /* AKPM: the latter is faster, but cyclone-only */
+ if (ioread32(ioaddr + DownListPtr) ==
+ vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc))
+ break; /* It still hasn't been processed. */
+#else
+ if ((vp->tx_ring[entry].status & DN_COMPLETE) == 0)
+ break; /* It still hasn't been processed. */
+#endif
+
+ if (vp->tx_skbuff[entry]) {
+ struct sk_buff *skb = vp->tx_skbuff[entry];
+#if DO_ZEROCOPY
+ int i;
+ for (i=0; i<=skb_shinfo(skb)->nr_frags; i++)
+ pci_unmap_single(VORTEX_PCI(vp),
+ le32_to_cpu(vp->tx_ring[entry].frag[i].addr),
+ le32_to_cpu(vp->tx_ring[entry].frag[i].length)&0xFFF,
+ PCI_DMA_TODEVICE);
+#else
+ pci_unmap_single(VORTEX_PCI(vp),
+ le32_to_cpu(vp->tx_ring[entry].addr), skb->len, PCI_DMA_TODEVICE);
+#endif
+ dev_kfree_skb_irq(skb);
+ vp->tx_skbuff[entry] = NULL;
+ } else {
+ pr_debug("boomerang_interrupt: no skb!\n");
+ }
+ /* dev->stats.tx_packets++; Counted below. */
+ dirty_tx++;
+ }
+ vp->dirty_tx = dirty_tx;
+ if (vp->cur_tx - dirty_tx <= TX_RING_SIZE - 1) {
+ if (vortex_debug > 6)
+ pr_debug("boomerang_interrupt: wake queue\n");
+ netif_wake_queue (dev);
+ }
+ }
+
+ /* Check for all uncommon interrupts at once. */
+ if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq))
+ vortex_error(dev, status);
+
+ if (--work_done < 0) {
+ pr_warning("%s: Too much work in interrupt, status %4.4x.\n",
+ dev->name, status);
+ /* Disable all pending interrupts. */
+ do {
+ vp->deferred |= status;
+ iowrite16(SetStatusEnb | (~vp->deferred & vp->status_enable),
+ ioaddr + EL3_CMD);
+ iowrite16(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
+ } while ((status = ioread16(ioaddr + EL3_CMD)) & IntLatch);
+ /* The timer will reenable interrupts. */
+ mod_timer(&vp->timer, jiffies + 1*HZ);
+ break;
+ }
+ /* Acknowledge the IRQ. */
+ iowrite16(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
+ if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
+ iowrite32(0x8000, vp->cb_fn_base + 4);
+
+ } while ((status = ioread16(ioaddr + EL3_STATUS)) & IntLatch);
+
+ if (vortex_debug > 4)
+ pr_debug("%s: exiting interrupt, status %4.4x.\n",
+ dev->name, status);
+handler_exit:
+ vp->handling_irq = 0;
+ spin_unlock(&vp->lock);
+ return IRQ_HANDLED;
+}
+
+static int vortex_rx(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ int i;
+ short rx_status;
+
+ if (vortex_debug > 5)
+ pr_debug("vortex_rx(): status %4.4x, rx_status %4.4x.\n",
+ ioread16(ioaddr+EL3_STATUS), ioread16(ioaddr+RxStatus));
+ while ((rx_status = ioread16(ioaddr + RxStatus)) > 0) {
+ if (rx_status & 0x4000) { /* Error, update stats. */
+ unsigned char rx_error = ioread8(ioaddr + RxErrors);
+ if (vortex_debug > 2)
+ pr_debug(" Rx error: status %2.2x.\n", rx_error);
+ dev->stats.rx_errors++;
+ if (rx_error & 0x01) dev->stats.rx_over_errors++;
+ if (rx_error & 0x02) dev->stats.rx_length_errors++;
+ if (rx_error & 0x04) dev->stats.rx_frame_errors++;
+ if (rx_error & 0x08) dev->stats.rx_crc_errors++;
+ if (rx_error & 0x10) dev->stats.rx_length_errors++;
+ } else {
+			/* The packet length: up to 4.5K! */
+ int pkt_len = rx_status & 0x1fff;
+ struct sk_buff *skb;
+
+ skb = dev_alloc_skb(pkt_len + 5);
+ if (vortex_debug > 4)
+ pr_debug("Receiving packet size %d status %4.4x.\n",
+ pkt_len, rx_status);
+ if (skb != NULL) {
+ skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
+ /* 'skb_put()' points to the start of sk_buff data area. */
+ if (vp->bus_master &&
+ ! (ioread16(ioaddr + Wn7_MasterStatus) & 0x8000)) {
+ dma_addr_t dma = pci_map_single(VORTEX_PCI(vp), skb_put(skb, pkt_len),
+ pkt_len, PCI_DMA_FROMDEVICE);
+ iowrite32(dma, ioaddr + Wn7_MasterAddr);
+ iowrite16((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
+ iowrite16(StartDMAUp, ioaddr + EL3_CMD);
+ while (ioread16(ioaddr + Wn7_MasterStatus) & 0x8000)
+ ;
+ pci_unmap_single(VORTEX_PCI(vp), dma, pkt_len, PCI_DMA_FROMDEVICE);
+ } else {
+ ioread32_rep(ioaddr + RX_FIFO,
+ skb_put(skb, pkt_len),
+ (pkt_len + 3) >> 2);
+ }
+ iowrite16(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
+ skb->protocol = eth_type_trans(skb, dev);
+ netif_rx(skb);
+ dev->stats.rx_packets++;
+ /* Wait a limited time to go to next packet. */
+ for (i = 200; i >= 0; i--)
+ if ( ! (ioread16(ioaddr + EL3_STATUS) & CmdInProgress))
+ break;
+ continue;
+ } else if (vortex_debug > 0)
+ pr_notice("%s: No memory to allocate a sk_buff of size %d.\n",
+ dev->name, pkt_len);
+ dev->stats.rx_dropped++;
+ }
+ issue_and_wait(dev, RxDiscard);
+ }
+
+ return 0;
+}
+
+static int
+boomerang_rx(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ int entry = vp->cur_rx % RX_RING_SIZE;
+ void __iomem *ioaddr = vp->ioaddr;
+ int rx_status;
+ int rx_work_limit = vp->dirty_rx + RX_RING_SIZE - vp->cur_rx;
+
+ if (vortex_debug > 5)
+ pr_debug("boomerang_rx(): status %4.4x\n", ioread16(ioaddr+EL3_STATUS));
+
+ while ((rx_status = le32_to_cpu(vp->rx_ring[entry].status)) & RxDComplete){
+ if (--rx_work_limit < 0)
+ break;
+ if (rx_status & RxDError) { /* Error, update stats. */
+ unsigned char rx_error = rx_status >> 16;
+ if (vortex_debug > 2)
+ pr_debug(" Rx error: status %2.2x.\n", rx_error);
+ dev->stats.rx_errors++;
+ if (rx_error & 0x01) dev->stats.rx_over_errors++;
+ if (rx_error & 0x02) dev->stats.rx_length_errors++;
+ if (rx_error & 0x04) dev->stats.rx_frame_errors++;
+ if (rx_error & 0x08) dev->stats.rx_crc_errors++;
+ if (rx_error & 0x10) dev->stats.rx_length_errors++;
+ } else {
+			/* The packet length: up to 4.5K! */
+ int pkt_len = rx_status & 0x1fff;
+ struct sk_buff *skb;
+ dma_addr_t dma = le32_to_cpu(vp->rx_ring[entry].addr);
+
+ if (vortex_debug > 4)
+ pr_debug("Receiving packet size %d status %4.4x.\n",
+ pkt_len, rx_status);
+
+ /* Check if the packet is long enough to just accept without
+ copying to a properly sized skbuff. */
+ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
+ skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
+ pci_dma_sync_single_for_cpu(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+ /* 'skb_put()' points to the start of sk_buff data area. */
+ memcpy(skb_put(skb, pkt_len),
+ vp->rx_skbuff[entry]->data,
+ pkt_len);
+ pci_dma_sync_single_for_device(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+ vp->rx_copy++;
+ } else {
+ /* Pass up the skbuff already on the Rx ring. */
+ skb = vp->rx_skbuff[entry];
+ vp->rx_skbuff[entry] = NULL;
+ skb_put(skb, pkt_len);
+ pci_unmap_single(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+ vp->rx_nocopy++;
+ }
+ skb->protocol = eth_type_trans(skb, dev);
+ { /* Use hardware checksum info. */
+ int csum_bits = rx_status & 0xee000000;
+ if (csum_bits &&
+ (csum_bits == (IPChksumValid | TCPChksumValid) ||
+ csum_bits == (IPChksumValid | UDPChksumValid))) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ vp->rx_csumhits++;
+ }
+ }
+ netif_rx(skb);
+ dev->stats.rx_packets++;
+ }
+ entry = (++vp->cur_rx) % RX_RING_SIZE;
+ }
+ /* Refill the Rx ring buffers. */
+ for (; vp->cur_rx - vp->dirty_rx > 0; vp->dirty_rx++) {
+ struct sk_buff *skb;
+ entry = vp->dirty_rx % RX_RING_SIZE;
+ if (vp->rx_skbuff[entry] == NULL) {
+ skb = netdev_alloc_skb_ip_align(dev, PKT_BUF_SZ);
+ if (skb == NULL) {
+ static unsigned long last_jif;
+ if (time_after(jiffies, last_jif + 10 * HZ)) {
+ pr_warning("%s: memory shortage\n", dev->name);
+ last_jif = jiffies;
+ }
+ if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE)
+ mod_timer(&vp->rx_oom_timer, RUN_AT(HZ * 1));
+ break; /* Bad news! */
+ }
+
+ vp->rx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
+ vp->rx_skbuff[entry] = skb;
+ }
+ vp->rx_ring[entry].status = 0; /* Clear complete bit. */
+ iowrite16(UpUnstall, ioaddr + EL3_CMD);
+ }
+ return 0;
+}
+
+/*
+ * If we've hit a total OOM refilling the Rx ring we poll once a second
+ * for some memory. Otherwise there is no way to restart the rx process.
+ */
+static void
+rx_oom_timer(unsigned long arg)
+{
+ struct net_device *dev = (struct net_device *)arg;
+ struct vortex_private *vp = netdev_priv(dev);
+
+ spin_lock_irq(&vp->lock);
+ if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE) /* This test is redundant, but makes me feel good */
+ boomerang_rx(dev);
+ if (vortex_debug > 1) {
+ pr_debug("%s: rx_oom_timer %s\n", dev->name,
+ ((vp->cur_rx - vp->dirty_rx) != RX_RING_SIZE) ? "succeeded" : "retrying");
+ }
+ spin_unlock_irq(&vp->lock);
+}
+
+static void
+vortex_down(struct net_device *dev, int final_down)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+
+ netif_stop_queue (dev);
+
+ del_timer_sync(&vp->rx_oom_timer);
+ del_timer_sync(&vp->timer);
+
+ /* Turn off statistics ASAP. We update dev->stats below. */
+ iowrite16(StatsDisable, ioaddr + EL3_CMD);
+
+ /* Disable the receiver and transmitter. */
+ iowrite16(RxDisable, ioaddr + EL3_CMD);
+ iowrite16(TxDisable, ioaddr + EL3_CMD);
+
+ /* Disable receiving 802.1q tagged frames */
+ set_8021q_mode(dev, 0);
+
+ if (dev->if_port == XCVR_10base2)
+ /* Turn off thinnet power. Green! */
+ iowrite16(StopCoax, ioaddr + EL3_CMD);
+
+ iowrite16(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
+
+ update_stats(ioaddr, dev);
+ if (vp->full_bus_master_rx)
+ iowrite32(0, ioaddr + UpListPtr);
+ if (vp->full_bus_master_tx)
+ iowrite32(0, ioaddr + DownListPtr);
+
+ if (final_down && VORTEX_PCI(vp)) {
+ vp->pm_state_valid = 1;
+ pci_save_state(VORTEX_PCI(vp));
+ acpi_set_WOL(dev);
+ }
+}
+
+static int
+vortex_close(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ int i;
+
+ if (netif_device_present(dev))
+ vortex_down(dev, 1);
+
+ if (vortex_debug > 1) {
+ pr_debug("%s: vortex_close() status %4.4x, Tx status %2.2x.\n",
+ dev->name, ioread16(ioaddr + EL3_STATUS), ioread8(ioaddr + TxStatus));
+ pr_debug("%s: vortex close stats: rx_nocopy %d rx_copy %d"
+ " tx_queued %d Rx pre-checksummed %d.\n",
+ dev->name, vp->rx_nocopy, vp->rx_copy, vp->queued_packet, vp->rx_csumhits);
+ }
+
+#if DO_ZEROCOPY
+ if (vp->rx_csumhits &&
+ (vp->drv_flags & HAS_HWCKSM) == 0 &&
+ (vp->card_idx >= MAX_UNITS || hw_checksums[vp->card_idx] == -1)) {
+ pr_warning("%s supports hardware checksums, and we're not using them!\n", dev->name);
+ }
+#endif
+
+ free_irq(dev->irq, dev);
+
+ if (vp->full_bus_master_rx) { /* Free Boomerang bus master Rx buffers. */
+ for (i = 0; i < RX_RING_SIZE; i++)
+ if (vp->rx_skbuff[i]) {
+ pci_unmap_single( VORTEX_PCI(vp), le32_to_cpu(vp->rx_ring[i].addr),
+ PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(vp->rx_skbuff[i]);
+ vp->rx_skbuff[i] = NULL;
+ }
+ }
+ if (vp->full_bus_master_tx) { /* Free Boomerang bus master Tx buffers. */
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ if (vp->tx_skbuff[i]) {
+ struct sk_buff *skb = vp->tx_skbuff[i];
+#if DO_ZEROCOPY
+ int k;
+
+ for (k=0; k<=skb_shinfo(skb)->nr_frags; k++)
+ pci_unmap_single(VORTEX_PCI(vp),
+ le32_to_cpu(vp->tx_ring[i].frag[k].addr),
+ le32_to_cpu(vp->tx_ring[i].frag[k].length)&0xFFF,
+ PCI_DMA_TODEVICE);
+#else
+ pci_unmap_single(VORTEX_PCI(vp), le32_to_cpu(vp->tx_ring[i].addr), skb->len, PCI_DMA_TODEVICE);
+#endif
+ dev_kfree_skb(skb);
+ vp->tx_skbuff[i] = NULL;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static void
+dump_tx_ring(struct net_device *dev)
+{
+ if (vortex_debug > 0) {
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+
+ if (vp->full_bus_master_tx) {
+ int i;
+			int stalled = ioread32(ioaddr + PktStatus) & 0x04;	/* Possibly racy, but it's only debug stuff */
+
+ pr_err(" Flags; bus-master %d, dirty %d(%d) current %d(%d)\n",
+ vp->full_bus_master_tx,
+ vp->dirty_tx, vp->dirty_tx % TX_RING_SIZE,
+ vp->cur_tx, vp->cur_tx % TX_RING_SIZE);
+ pr_err(" Transmit list %8.8x vs. %p.\n",
+ ioread32(ioaddr + DownListPtr),
+ &vp->tx_ring[vp->dirty_tx % TX_RING_SIZE]);
+ issue_and_wait(dev, DownStall);
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ unsigned int length;
+
+#if DO_ZEROCOPY
+ length = le32_to_cpu(vp->tx_ring[i].frag[0].length);
+#else
+ length = le32_to_cpu(vp->tx_ring[i].length);
+#endif
+ pr_err(" %d: @%p length %8.8x status %8.8x\n",
+ i, &vp->tx_ring[i], length,
+ le32_to_cpu(vp->tx_ring[i].status));
+ }
+ if (!stalled)
+ iowrite16(DownUnstall, ioaddr + EL3_CMD);
+ }
+ }
+}
+
+static struct net_device_stats *vortex_get_stats(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ unsigned long flags;
+
+ if (netif_device_present(dev)) { /* AKPM: Used to be netif_running */
+ spin_lock_irqsave (&vp->lock, flags);
+ update_stats(ioaddr, dev);
+ spin_unlock_irqrestore (&vp->lock, flags);
+ }
+ return &dev->stats;
+}
+
+/* Update statistics.
+ Unlike with the EL3 we need not worry about interrupts changing
+ the window setting from underneath us, but we must still guard
+ against a race condition with a StatsUpdate interrupt updating the
+ table. This is done by checking that the ASM (!) code generated uses
+ atomic updates with '+='.
+ */
+static void update_stats(void __iomem *ioaddr, struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+
+ /* Unlike the 3c5x9 we need not turn off stats updates while reading. */
+ /* Switch to the stats window, and read everything. */
+ dev->stats.tx_carrier_errors += window_read8(vp, 6, 0);
+ dev->stats.tx_heartbeat_errors += window_read8(vp, 6, 1);
+ dev->stats.tx_window_errors += window_read8(vp, 6, 4);
+ dev->stats.rx_fifo_errors += window_read8(vp, 6, 5);
+ dev->stats.tx_packets += window_read8(vp, 6, 6);
+ dev->stats.tx_packets += (window_read8(vp, 6, 9) &
+ 0x30) << 4;
+ /* Rx packets */ window_read8(vp, 6, 7); /* Must read to clear */
+ /* Don't bother with register 9, an extension of registers 6&7.
+ If we do use the 6&7 values the atomic update assumption above
+ is invalid. */
+ dev->stats.rx_bytes += window_read16(vp, 6, 10);
+ dev->stats.tx_bytes += window_read16(vp, 6, 12);
+ /* Extra stats for get_ethtool_stats() */
+ vp->xstats.tx_multiple_collisions += window_read8(vp, 6, 2);
+ vp->xstats.tx_single_collisions += window_read8(vp, 6, 3);
+ vp->xstats.tx_deferred += window_read8(vp, 6, 8);
+ vp->xstats.rx_bad_ssd += window_read8(vp, 4, 12);
+
+ dev->stats.collisions = vp->xstats.tx_multiple_collisions
+ + vp->xstats.tx_single_collisions
+ + vp->xstats.tx_max_collisions;
+
+ {
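+		/* Window 4, offset 13 appears to hold the upper 4 bits of the
+		 * Rx (low nibble) and Tx (high nibble) byte counters. */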
+ u8 up = window_read8(vp, 4, 13);
+ dev->stats.rx_bytes += (up & 0x0f) << 16;
+ dev->stats.tx_bytes += (up & 0xf0) << 12;
+ }
+}
+
+static int vortex_nway_reset(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+
+ return mii_nway_restart(&vp->mii);
+}
+
+static int vortex_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+
+ return mii_ethtool_gset(&vp->mii, cmd);
+}
+
+static int vortex_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+
+ return mii_ethtool_sset(&vp->mii, cmd);
+}
+
+static u32 vortex_get_msglevel(struct net_device *dev)
+{
+ return vortex_debug;
+}
+
+static void vortex_set_msglevel(struct net_device *dev, u32 dbg)
+{
+ vortex_debug = dbg;
+}
+
+static int vortex_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return VORTEX_NUM_STATS;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void vortex_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&vp->lock, flags);
+ update_stats(ioaddr, dev);
+ spin_unlock_irqrestore(&vp->lock, flags);
+
+ data[0] = vp->xstats.tx_deferred;
+ data[1] = vp->xstats.tx_max_collisions;
+ data[2] = vp->xstats.tx_multiple_collisions;
+ data[3] = vp->xstats.tx_single_collisions;
+ data[4] = vp->xstats.rx_bad_ssd;
+}
+
+
+static void vortex_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+ switch (stringset) {
+ case ETH_SS_STATS:
+		memcpy(data, &ethtool_stats_keys, sizeof(ethtool_stats_keys));
+ break;
+ default:
+ WARN_ON(1);
+ break;
+ }
+}
+
+static void vortex_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+
+ strcpy(info->driver, DRV_NAME);
+ if (VORTEX_PCI(vp)) {
+ strcpy(info->bus_info, pci_name(VORTEX_PCI(vp)));
+ } else {
+ if (VORTEX_EISA(vp))
+ strcpy(info->bus_info, dev_name(vp->gendev));
+ else
+ sprintf(info->bus_info, "EISA 0x%lx %d",
+ dev->base_addr, dev->irq);
+ }
+}
+
+static void vortex_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+
+ if (!VORTEX_PCI(vp))
+ return;
+
+ wol->supported = WAKE_MAGIC;
+
+ wol->wolopts = 0;
+ if (vp->enable_wol)
+ wol->wolopts |= WAKE_MAGIC;
+}
+
+static int vortex_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+
+ if (!VORTEX_PCI(vp))
+ return -EOPNOTSUPP;
+
+ if (wol->wolopts & ~WAKE_MAGIC)
+ return -EINVAL;
+
+ if (wol->wolopts & WAKE_MAGIC)
+ vp->enable_wol = 1;
+ else
+ vp->enable_wol = 0;
+ acpi_set_WOL(dev);
+
+ return 0;
+}
+
+static const struct ethtool_ops vortex_ethtool_ops = {
+ .get_drvinfo = vortex_get_drvinfo,
+ .get_strings = vortex_get_strings,
+ .get_msglevel = vortex_get_msglevel,
+ .set_msglevel = vortex_set_msglevel,
+ .get_ethtool_stats = vortex_get_ethtool_stats,
+ .get_sset_count = vortex_get_sset_count,
+ .get_settings = vortex_get_settings,
+ .set_settings = vortex_set_settings,
+ .get_link = ethtool_op_get_link,
+ .nway_reset = vortex_nway_reset,
+ .get_wol = vortex_get_wol,
+ .set_wol = vortex_set_wol,
+};
+
+#ifdef CONFIG_PCI
+/*
+ * Must power the device up to do MDIO operations
+ */
+static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ int err;
+ struct vortex_private *vp = netdev_priv(dev);
+ pci_power_t state = 0;
+
+ if(VORTEX_PCI(vp))
+ state = VORTEX_PCI(vp)->current_state;
+
+ /* The kernel core really should have pci_get_power_state() */
+
+ if(state != 0)
+ pci_set_power_state(VORTEX_PCI(vp), PCI_D0);
+ err = generic_mii_ioctl(&vp->mii, if_mii(rq), cmd, NULL);
+ if(state != 0)
+ pci_set_power_state(VORTEX_PCI(vp), state);
+
+ return err;
+}
+#endif
+
+
+/* Pre-Cyclone chips have no documented multicast filter, so the only
+ multicast setting is to receive all multicast frames. At least
+ the chip has a very clean way to set the mode, unlike many others. */
+static void set_rx_mode(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+ int new_mode;
+
+ if (dev->flags & IFF_PROMISC) {
+ if (vortex_debug > 3)
+ pr_notice("%s: Setting promiscuous mode.\n", dev->name);
+ new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast|RxProm;
+ } else if (!netdev_mc_empty(dev) || dev->flags & IFF_ALLMULTI) {
+ new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast;
+ } else
+ new_mode = SetRxFilter | RxStation | RxBroadcast;
+
+ iowrite16(new_mode, ioaddr + EL3_CMD);
+}
+
+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+/* Setup the card so that it can receive frames with an 802.1q VLAN tag.
+ Note that this must be done after each RxReset due to some backwards
+ compatibility logic in the Cyclone and Tornado ASICs */
+
+/* The Ethernet Type used for 802.1q tagged frames */
+#define VLAN_ETHER_TYPE 0x8100
+
+static void set_8021q_mode(struct net_device *dev, int enable)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ int mac_ctrl;
+
+ if ((vp->drv_flags&IS_CYCLONE) || (vp->drv_flags&IS_TORNADO)) {
+ /* cyclone and tornado chipsets can recognize 802.1q
+ * tagged frames and treat them correctly */
+
+ int max_pkt_size = dev->mtu+14; /* MTU+Ethernet header */
+ if (enable)
+ max_pkt_size += 4; /* 802.1Q VLAN tag */
+
+ window_write16(vp, max_pkt_size, 3, Wn3_MaxPktSize);
+
+ /* set VlanEtherType to let the hardware checksumming
+ treat tagged frames correctly */
+ window_write16(vp, VLAN_ETHER_TYPE, 7, Wn7_VlanEtherType);
+ } else {
+ /* on older cards we have to enable large frames */
+
+ vp->large_frames = dev->mtu > 1500 || enable;
+
+ mac_ctrl = window_read16(vp, 3, Wn3_MAC_Ctrl);
+ if (vp->large_frames)
+ mac_ctrl |= 0x40;
+ else
+ mac_ctrl &= ~0x40;
+ window_write16(vp, mac_ctrl, 3, Wn3_MAC_Ctrl);
+ }
+}
+#else
+
+static void set_8021q_mode(struct net_device *dev, int enable)
+{
+}
+
+
+#endif
+
+/* MII transceiver control section.
+ Read and write the MII registers using software-generated serial
+ MDIO protocol. See the MII specifications or DP83840A data sheet
+ for details. */
+
+/* The maximum data clock rate is 2.5 MHz. The minimum timing is usually
+ met by back-to-back PCI I/O cycles, but we insert a delay to avoid
+ "overclocking" issues. */
+static void mdio_delay(struct vortex_private *vp)
+{
+ window_read32(vp, 4, Wn4_PhysicalMgmt);
+}
+
+#define MDIO_SHIFT_CLK 0x01
+#define MDIO_DIR_WRITE 0x04
+#define MDIO_DATA_WRITE0 (0x00 | MDIO_DIR_WRITE)
+#define MDIO_DATA_WRITE1 (0x02 | MDIO_DIR_WRITE)
+#define MDIO_DATA_READ 0x02
+#define MDIO_ENB_IN 0x00
+
+/* Generate the preamble required for initial synchronization and
+ a few older transceivers. */
+static void mdio_sync(struct vortex_private *vp, int bits)
+{
+ /* Establish sync by sending at least 32 logic ones. */
+ while (-- bits >= 0) {
+ window_write16(vp, MDIO_DATA_WRITE1, 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ window_write16(vp, MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK,
+ 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ }
+}
+
+static int mdio_read(struct net_device *dev, int phy_id, int location)
+{
+ int i;
+ struct vortex_private *vp = netdev_priv(dev);
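+	/* 0xf6 << 10 packs a leading idle bit, the 01 start delimiter and the
+	 * 10 read opcode of a clause-22 MDIO frame; the 5-bit PHY and register
+	 * addresses follow in the low bits. */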
+ int read_cmd = (0xf6 << 10) | (phy_id << 5) | location;
+ unsigned int retval = 0;
+
+ spin_lock_bh(&vp->mii_lock);
+
+ if (mii_preamble_required)
+ mdio_sync(vp, 32);
+
+ /* Shift the read command bits out. */
+ for (i = 14; i >= 0; i--) {
+ int dataval = (read_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
+ window_write16(vp, dataval, 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ window_write16(vp, dataval | MDIO_SHIFT_CLK,
+ 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ }
+ /* Read the two transition, 16 data, and wire-idle bits. */
+ for (i = 19; i > 0; i--) {
+ window_write16(vp, MDIO_ENB_IN, 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ retval = (retval << 1) |
+ ((window_read16(vp, 4, Wn4_PhysicalMgmt) &
+ MDIO_DATA_READ) ? 1 : 0);
+ window_write16(vp, MDIO_ENB_IN | MDIO_SHIFT_CLK,
+ 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ }
+
+ spin_unlock_bh(&vp->mii_lock);
+
+ return retval & 0x20000 ? 0xffff : retval>>1 & 0xffff;
+}
+
+static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
+{
+ struct vortex_private *vp = netdev_priv(dev);
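+	/* 0x50020000 supplies the 01 start delimiter, 01 write opcode and 10
+	 * turnaround bits of a clause-22 MDIO write frame; the PHY address,
+	 * register address and data are OR-ed into their fields. */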
+ int write_cmd = 0x50020000 | (phy_id << 23) | (location << 18) | value;
+ int i;
+
+ spin_lock_bh(&vp->mii_lock);
+
+ if (mii_preamble_required)
+ mdio_sync(vp, 32);
+
+ /* Shift the command bits out. */
+ for (i = 31; i >= 0; i--) {
+ int dataval = (write_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
+ window_write16(vp, dataval, 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ window_write16(vp, dataval | MDIO_SHIFT_CLK,
+ 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ }
+ /* Leave the interface idle. */
+ for (i = 1; i >= 0; i--) {
+ window_write16(vp, MDIO_ENB_IN, 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ window_write16(vp, MDIO_ENB_IN | MDIO_SHIFT_CLK,
+ 4, Wn4_PhysicalMgmt);
+ mdio_delay(vp);
+ }
+
+ spin_unlock_bh(&vp->mii_lock);
+}
+
+/* ACPI: Advanced Configuration and Power Interface. */
+/* Set Wake-On-LAN mode and put the board into D3 (power-down) state. */
+static void acpi_set_WOL(struct net_device *dev)
+{
+ struct vortex_private *vp = netdev_priv(dev);
+ void __iomem *ioaddr = vp->ioaddr;
+
+ device_set_wakeup_enable(vp->gendev, vp->enable_wol);
+
+ if (vp->enable_wol) {
+ /* Power up on: 1==Downloaded Filter, 2==Magic Packets, 4==Link Status. */
+ window_write16(vp, 2, 7, 0x0c);
+ /* The RxFilter must accept the WOL frames. */
+ iowrite16(SetRxFilter|RxStation|RxMulticast|RxBroadcast, ioaddr + EL3_CMD);
+ iowrite16(RxEnable, ioaddr + EL3_CMD);
+
+ if (pci_enable_wake(VORTEX_PCI(vp), PCI_D3hot, 1)) {
+ pr_info("%s: WOL not supported.\n", pci_name(VORTEX_PCI(vp)));
+
+ vp->enable_wol = 0;
+ return;
+ }
+
+ if (VORTEX_PCI(vp)->current_state < PCI_D3hot)
+ return;
+
+ /* Change the power state to D3; RxEnable doesn't take effect. */
+ pci_set_power_state(VORTEX_PCI(vp), PCI_D3hot);
+ }
+}
+
+
+static void __devexit vortex_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct vortex_private *vp;
+
+ if (!dev) {
+ pr_err("vortex_remove_one called for Compaq device!\n");
+ BUG();
+ }
+
+ vp = netdev_priv(dev);
+
+ if (vp->cb_fn_base)
+ pci_iounmap(VORTEX_PCI(vp), vp->cb_fn_base);
+
+ unregister_netdev(dev);
+
+ if (VORTEX_PCI(vp)) {
+ pci_set_power_state(VORTEX_PCI(vp), PCI_D0); /* Go active */
+ if (vp->pm_state_valid)
+ pci_restore_state(VORTEX_PCI(vp));
+ pci_disable_device(VORTEX_PCI(vp));
+ }
+ /* Should really use issue_and_wait() here */
+ iowrite16(TotalReset | ((vp->drv_flags & EEPROM_RESET) ? 0x04 : 0x14),
+ vp->ioaddr + EL3_CMD);
+
+ pci_iounmap(VORTEX_PCI(vp), vp->ioaddr);
+
+ pci_free_consistent(pdev,
+ sizeof(struct boom_rx_desc) * RX_RING_SIZE
+ + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
+ vp->rx_ring,
+ vp->rx_ring_dma);
+ if (vp->must_free_region)
+ release_region(dev->base_addr, vp->io_size);
+ free_netdev(dev);
+}
+
+
+static struct pci_driver vortex_driver = {
+ .name = "3c59x",
+ .probe = vortex_init_one,
+ .remove = __devexit_p(vortex_remove_one),
+ .id_table = vortex_pci_tbl,
+ .driver.pm = VORTEX_PM_OPS,
+};
+
+
+static int vortex_have_pci;
+static int vortex_have_eisa;
+
+
+static int __init vortex_init(void)
+{
+ int pci_rc, eisa_rc;
+
+ pci_rc = pci_register_driver(&vortex_driver);
+ eisa_rc = vortex_eisa_init();
+
+ if (pci_rc == 0)
+ vortex_have_pci = 1;
+ if (eisa_rc > 0)
+ vortex_have_eisa = 1;
+
+ return (vortex_have_pci + vortex_have_eisa) ? 0 : -ENODEV;
+}
+
+
+static void __exit vortex_eisa_cleanup(void)
+{
+ struct vortex_private *vp;
+ void __iomem *ioaddr;
+
+#ifdef CONFIG_EISA
+ /* Take care of the EISA devices */
+ eisa_driver_unregister(&vortex_eisa_driver);
+#endif
+
+ if (compaq_net_device) {
+ vp = netdev_priv(compaq_net_device);
+ ioaddr = ioport_map(compaq_net_device->base_addr,
+ VORTEX_TOTAL_SIZE);
+
+ unregister_netdev(compaq_net_device);
+ iowrite16(TotalReset, ioaddr + EL3_CMD);
+ release_region(compaq_net_device->base_addr,
+ VORTEX_TOTAL_SIZE);
+
+ free_netdev(compaq_net_device);
+ }
+}
+
+
+static void __exit vortex_cleanup(void)
+{
+ if (vortex_have_pci)
+ pci_unregister_driver(&vortex_driver);
+ if (vortex_have_eisa)
+ vortex_eisa_cleanup();
+}
+
+
+module_init(vortex_init);
+module_exit(vortex_cleanup);
--- /dev/null
+#
+# 3Com Ethernet device configuration
+#
+
+config NET_VENDOR_3COM
+ bool "3Com devices"
+ depends on ISA || EISA || MCA || PCI || PCMCIA
+ ---help---
+ If you have a network (Ethernet) card belonging to this class, say Y
+ and read the Ethernet-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about 3Com cards. If you say Y, you will be asked for
+ your specific card in the following questions.
+
+if NET_VENDOR_3COM
+
+config EL1
+ tristate "3c501 \"EtherLink\" support"
+ depends on ISA
+ ---help---
+ If you have a network (Ethernet) card of this type, say Y and read
+ the Ethernet-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>. Also, consider buying a
+ new card, since the 3c501 is slow, broken, and obsolete: you will
+	  have problems.  Some people suggest pinging ("man ping") a nearby
+	  machine every minute ("man cron") when using this card.
+
+ To compile this driver as a module, choose M here. The module
+ will be called 3c501.
+
+config EL3
+ tristate "3c509/3c529 (MCA)/3c579 \"EtherLink III\" support"
+ depends on (ISA || EISA || MCA)
+ ---help---
+ If you have a network (Ethernet) card belonging to the 3Com
+ EtherLinkIII series, say Y and read the Ethernet-HOWTO, available
+ from <http://www.tldp.org/docs.html#howto>.
+
+ If your card is not working you may need to use the DOS
+ setup disk to disable Plug & Play mode, and to select the default
+ media type.
+
+ To compile this driver as a module, choose M here. The module
+ will be called 3c509.
+
+config 3C515
+ tristate "3c515 ISA \"Fast EtherLink\""
+ depends on (ISA || EISA) && ISA_DMA_API
+ ---help---
+ If you have a 3Com ISA EtherLink XL "Corkscrew" 3c515 Fast Ethernet
+ network card, say Y and read the Ethernet-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>.
+
+ To compile this driver as a module, choose M here. The module
+ will be called 3c515.
+
+config PCMCIA_3C574
+ tristate "3Com 3c574 PCMCIA support"
+ depends on PCMCIA
+ ---help---
+ Say Y here if you intend to attach a 3Com 3c574 or compatible PCMCIA
+ (PC-card) Fast Ethernet card to your computer.
+
+ To compile this driver as a module, choose M here: the module will be
+ called 3c574_cs. If unsure, say N.
+
+config PCMCIA_3C589
+ tristate "3Com 3c589 PCMCIA support"
+ depends on PCMCIA
+ ---help---
+ Say Y here if you intend to attach a 3Com 3c589 or compatible PCMCIA
+ (PC-card) Ethernet card to your computer.
+
+ To compile this driver as a module, choose M here: the module will be
+ called 3c589_cs. If unsure, say N.
+
+config VORTEX
+ tristate "3c590/3c900 series (592/595/597) \"Vortex/Boomerang\" support"
+ depends on (PCI || EISA)
+ select MII
+ ---help---
+ This option enables driver support for a large number of 10Mbps and
+ 10/100Mbps EISA, PCI and PCMCIA 3Com network cards:
+
+ "Vortex" (Fast EtherLink 3c590/3c592/3c595/3c597) EISA and PCI
+ "Boomerang" (EtherLink XL 3c900 or 3c905) PCI
+ "Cyclone" (3c540/3c900/3c905/3c980/3c575/3c656) PCI and Cardbus
+ "Tornado" (3c905) PCI
+ "Hurricane" (3c555/3cSOHO) PCI
+
+ If you have such a card, say Y and read the Ethernet-HOWTO,
+ available from <http://www.tldp.org/docs.html#howto>. More
+ specific information is in
+ <file:Documentation/networking/vortex.txt> and in the comments at
+ the beginning of <file:drivers/net/3c59x.c>.
+
+ To compile this support as a module, choose M here.
+
+config TYPHOON
+ tristate "3cr990 series \"Typhoon\" support"
+ depends on PCI
+ select CRC32
+ ---help---
+ This option enables driver support for the 3cr990 series of cards:
+
+ 3C990-TX, 3CR990-TX-95, 3CR990-TX-97, 3CR990-FX-95, 3CR990-FX-97,
+ 3CR990SVR, 3CR990SVR95, 3CR990SVR97, 3CR990-FX-95 Server,
+ 3CR990-FX-97 Server, 3C990B-TX-M, 3C990BSVR
+
+ If you have a network (Ethernet) card of this type, say Y and read
+ the Ethernet-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>.
+
+ To compile this driver as a module, choose M here. The module
+ will be called typhoon.
+
+config ACENIC
+ tristate "Alteon AceNIC/3Com 3C985/NetGear GA620 Gigabit support"
+ depends on PCI
+ ---help---
+ Say Y here if you have an Alteon AceNIC, 3Com 3C985(B), NetGear
+ GA620, SGI Gigabit or Farallon PN9000-SX PCI Gigabit Ethernet
+ adapter. The driver allows for using the Jumbo Frame option (9000
+ bytes/frame) however it requires that your switches can handle this
+ as well. To enable Jumbo Frames, add `mtu 9000' to your ifconfig
+ line.
+
+ To compile this driver as a module, choose M here: the
+ module will be called acenic.
+
+config ACENIC_OMIT_TIGON_I
+ bool "Omit support for old Tigon I based AceNICs"
+ depends on ACENIC
+ ---help---
+ Say Y here if you only have Tigon II based AceNICs and want to leave
+ out support for the older Tigon I based cards which are no longer
+	  being sold (i.e. the original Alteon AceNIC and 3Com 3C985 (non-B
+	  version)).  This will reduce the size of the driver object by
+	  approx. 100KB.  If you are not sure whether your card is a Tigon I or a
+ Tigon II, say N here.
+
+ The safe and default value for this is N.
+
+endif # NET_VENDOR_3COM
--- /dev/null
+#
+# Makefile for the 3Com Ethernet device drivers
+#
+
+obj-$(CONFIG_EL1) += 3c501.o
+obj-$(CONFIG_EL3) += 3c509.o
+obj-$(CONFIG_3C515) += 3c515.o
+obj-$(CONFIG_PCMCIA_3C589) += 3c589_cs.o
+obj-$(CONFIG_PCMCIA_3C574) += 3c574_cs.o
+obj-$(CONFIG_VORTEX) += 3c59x.o
+obj-$(CONFIG_ACENIC) += acenic.o
+obj-$(CONFIG_TYPHOON) += typhoon.o
--- /dev/null
+/*
+ * acenic.c: Linux driver for the Alteon AceNIC Gigabit Ethernet card
+ * and other Tigon based cards.
+ *
+ * Copyright 1998-2002 by Jes Sorensen, <jes@trained-monkey.org>.
+ *
+ * Thanks to Alteon and 3Com for providing hardware and documentation
+ * enabling me to write this driver.
+ *
+ * A mailing list for discussing the use of this driver has been
+ * setup, please subscribe to the lists if you have any questions
+ * about the driver. Send mail to linux-acenic-help@sunsite.auc.dk to
+ * see how to subscribe.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * Additional credits:
+ * Pete Wyckoff <wyckoff@ca.sandia.gov>: Initial Linux/Alpha and trace
+ * dump support. The trace dump support has not been
+ * integrated yet however.
+ * Troy Benjegerdes: Big Endian (PPC) patches.
+ * Nate Stahl: Better out of memory handling and stats support.
+ * Aman Singla: Nasty race between interrupt handler and tx code dealing
+ * with 'testing the tx_ret_csm and setting tx_full'
+ * David S. Miller <davem@redhat.com>: conversion to new PCI dma mapping
+ * infrastructure and Sparc support
+ * Pierrick Pinasseau (CERN): For lending me an Ultra 5 to test the
+ * driver under Linux/Sparc64
+ * Matt Domsch <Matt_Domsch@dell.com>: Detect Alteon 1000baseT cards
+ * ETHTOOL_GDRVINFO support
+ * Chip Salzenberg <chip@valinux.com>: Fix race condition between tx
+ * handler and close() cleanup.
+ * Ken Aaker <kdaaker@rchland.vnet.ibm.com>: Correct check for whether
+ * memory mapped IO is enabled to
+ * make the driver work on RS/6000.
+ * Takayoshi Kouchi <kouchi@hpc.bs1.fc.nec.co.jp>: Identifying problem
+ * where the driver would disable
+ * bus master mode if it had to disable
+ * write and invalidate.
+ * Stephen Hack <stephen_hack@hp.com>: Fixed ace_set_mac_addr for little
+ * endian systems.
+ * Val Henson <vhenson@esscom.com>: Reset Jumbo skb producer and
+ * rx producer index when
+ * flushing the Jumbo ring.
+ * Hans Grobler <grobh@sun.ac.za>: Memory leak fixes in the
+ * driver init path.
+ * Grant Grundler <grundler@cup.hp.com>: PCI write posting fixes.
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/sockios.h>
+#include <linux/firmware.h>
+#include <linux/slab.h>
+#include <linux/prefetch.h>
+#include <linux/if_vlan.h>
+
+#ifdef SIOCETHTOOL
+#include <linux/ethtool.h>
+#endif
+
+#include <net/sock.h>
+#include <net/ip.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/byteorder.h>
+#include <asm/uaccess.h>
+
+
+#define DRV_NAME "acenic"
+
+#undef INDEX_DEBUG
+
+#ifdef CONFIG_ACENIC_OMIT_TIGON_I
+#define ACE_IS_TIGON_I(ap) 0
+#define ACE_TX_RING_ENTRIES(ap) MAX_TX_RING_ENTRIES
+#else
+#define ACE_IS_TIGON_I(ap) (ap->version == 1)
+#define ACE_TX_RING_ENTRIES(ap) ap->tx_ring_entries
+#endif
+
+#ifndef PCI_VENDOR_ID_ALTEON
+#define PCI_VENDOR_ID_ALTEON 0x12ae
+#endif
+#ifndef PCI_DEVICE_ID_ALTEON_ACENIC_FIBRE
+#define PCI_DEVICE_ID_ALTEON_ACENIC_FIBRE 0x0001
+#define PCI_DEVICE_ID_ALTEON_ACENIC_COPPER 0x0002
+#endif
+#ifndef PCI_DEVICE_ID_3COM_3C985
+#define PCI_DEVICE_ID_3COM_3C985 0x0001
+#endif
+#ifndef PCI_VENDOR_ID_NETGEAR
+#define PCI_VENDOR_ID_NETGEAR 0x1385
+#define PCI_DEVICE_ID_NETGEAR_GA620 0x620a
+#endif
+#ifndef PCI_DEVICE_ID_NETGEAR_GA620T
+#define PCI_DEVICE_ID_NETGEAR_GA620T 0x630a
+#endif
+
+
+/*
+ * Farallon used the DEC vendor ID by mistake and they seem not
+ * to care - stinky!
+ */
+#ifndef PCI_DEVICE_ID_FARALLON_PN9000SX
+#define PCI_DEVICE_ID_FARALLON_PN9000SX 0x1a
+#endif
+#ifndef PCI_DEVICE_ID_FARALLON_PN9100T
+#define PCI_DEVICE_ID_FARALLON_PN9100T 0xfa
+#endif
+#ifndef PCI_VENDOR_ID_SGI
+#define PCI_VENDOR_ID_SGI 0x10a9
+#endif
+#ifndef PCI_DEVICE_ID_SGI_ACENIC
+#define PCI_DEVICE_ID_SGI_ACENIC 0x0009
+#endif
+
+static DEFINE_PCI_DEVICE_TABLE(acenic_pci_tbl) = {
+ { PCI_VENDOR_ID_ALTEON, PCI_DEVICE_ID_ALTEON_ACENIC_FIBRE,
+ PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
+ { PCI_VENDOR_ID_ALTEON, PCI_DEVICE_ID_ALTEON_ACENIC_COPPER,
+ PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3C985,
+ PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
+ { PCI_VENDOR_ID_NETGEAR, PCI_DEVICE_ID_NETGEAR_GA620,
+ PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
+ { PCI_VENDOR_ID_NETGEAR, PCI_DEVICE_ID_NETGEAR_GA620T,
+ PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
+ /*
+ * Farallon used the DEC vendor ID on their cards incorrectly,
+ * then later Alteon's ID.
+ */
+ { PCI_VENDOR_ID_DEC, PCI_DEVICE_ID_FARALLON_PN9000SX,
+ PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
+ { PCI_VENDOR_ID_ALTEON, PCI_DEVICE_ID_FARALLON_PN9100T,
+ PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
+ { PCI_VENDOR_ID_SGI, PCI_DEVICE_ID_SGI_ACENIC,
+ PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, acenic_pci_tbl);
+
+#define ace_sync_irq(irq) synchronize_irq(irq)
+
+#ifndef offset_in_page
+#define offset_in_page(ptr) ((unsigned long)(ptr) & ~PAGE_MASK)
+#endif
+
+#define ACE_MAX_MOD_PARMS 8
+#define BOARD_IDX_STATIC 0
+#define BOARD_IDX_OVERFLOW -1
+
+#include "acenic.h"
+
+/*
+ * These must be defined before the firmware is included.
+ */
+#define MAX_TEXT_LEN 96*1024
+#define MAX_RODATA_LEN 8*1024
+#define MAX_DATA_LEN 2*1024
+
+#ifndef tigon2FwReleaseLocal
+#define tigon2FwReleaseLocal 0
+#endif
+
+/*
+ * This driver currently supports Tigon I and Tigon II based cards
+ * including the Alteon AceNIC, the 3Com 3C985[B] and NetGear
+ * GA620. The driver should also work on the SGI, DEC and Farallon
+ * versions of the card, however I have not been able to test that
+ * myself.
+ *
+ * This card is really neat, it supports receive hardware checksumming
+ * and jumbo frames (up to 9000 bytes) and does a lot of work in the
+ * firmware. Also the programming interface is quite neat, except for
+ * the parts dealing with the i2c eeprom on the card ;-)
+ *
+ * Using jumbo frames:
+ *
+ * To enable jumbo frames, simply specify an mtu between 1500 and 9000
+ * bytes to ifconfig. Jumbo frames can be enabled or disabled at any time
+ * by running `ifconfig eth<X> mtu <MTU>' with <X> being the Ethernet
+ * interface number and <MTU> being the MTU value.
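+ * For example, `ifconfig eth0 mtu 9000' enables jumbo frames on the
+ * first interface and `ifconfig eth0 mtu 1500' switches it back to
+ * standard frames.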
+ *
+ * Module parameters:
+ *
+ * When compiled as a loadable module, the driver allows for a number
+ * of module parameters to be specified. The driver supports the
+ * following module parameters:
+ *
+ * trace=<val> - Firmware trace level. This requires special traced
+ * firmware to replace the firmware supplied with
+ * the driver - for debugging purposes only.
+ *
+ * link=<val> - Link state. Normally you want to use the default link
+ * parameters set by the driver. This can be used to
+ * override these in case your switch doesn't negotiate
+ * the link properly. Valid values are:
+ * 0x0001 - Force half duplex link.
+ * 0x0002 - Do not negotiate line speed with the other end.
+ * 0x0010 - 10Mbit/sec link.
+ * 0x0020 - 100Mbit/sec link.
+ * 0x0040 - 1000Mbit/sec link.
+ * 0x0100 - Do not negotiate flow control.
+ * 0x0200 - Enable RX flow control Y
+ * 0x0400 - Enable TX flow control Y (Tigon II NICs only).
+ * Default value is 0x0270, ie. enable link+flow
+ * control negotiation, negotiating the highest
+ * possible link speed with RX flow control enabled.
+ *
+ * When disabling link speed negotiation, only one link
+ * speed is allowed to be specified!
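+ *
+ * (For reference, the default 0x0270 is just the OR of the bits
+ * listed above: 0x0200 | 0x0040 | 0x0020 | 0x0010, ie. RX flow
+ * control plus all three line speeds, with the "do not negotiate"
+ * bits 0x0002 and 0x0100 left clear.)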
+ *
+ * tx_coal_tick=<val> - number of coalescing clock ticks (us) allowed
+ * to wait for more packets to arrive before
+ * interrupting the host, from the time the first
+ * packet arrives.
+ *
+ * rx_coal_tick=<val> - number of coalescing clock ticks (us) allowed
+ * to wait for more packets to arrive in the receive ring,
+ * before interrupting the host, after receiving the
+ * first packet in the ring.
+ *
+ * max_tx_desc=<val> - maximum number of transmit descriptors
+ * (packets) transmitted before interrupting the host.
+ *
+ * max_rx_desc=<val> - maximum number of receive descriptors
+ * (packets) received before interrupting the host.
+ *
+ * tx_ratio=<val> - 6 bit value (0 - 63) specifying the split in 64th
+ * increments of the NIC's on board memory to be used for
+ * transmit and receive buffers. For the 1MB NIC app. 800KB
+ * is available, on the 1/2MB NIC app. 300KB is available.
+ * 68KB will always be available as a minimum for both
+ * directions. The default value is a 50/50 split.
+ * dis_pci_mem_inval=<val> - disable PCI memory write and invalidate
+ * operations, default (1) is to always disable this as
+ * that is what Alteon does on NT. I have not been able
+ * to measure any real performance differences with
+ * this on my systems. Set <val>=0 if you want to
+ * enable these operations.
+ *
+ * If you use more than one NIC, separate the parameters for the
+ * individual NICs with commas, eg. trace=0,0x00001fff,0 if you want to
+ * run tracing on NIC #2 but not on NICs #1 and #3.
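+ *
+ * A purely illustrative (hypothetical) invocation:
+ *
+ * modprobe acenic trace=0,0x00001fff,0 link=0,0x0040,0
+ *
+ * would enable firmware tracing and request a 1000Mbit/sec link on
+ * NIC #2 only, leaving NIC #1 and #3 at the driver defaults.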
+ *
+ * TODO:
+ *
+ * - Proper multicast support.
+ * - NIC dump support.
+ * - More tuning parameters.
+ *
+ * The mini ring is not used under Linux and I am not sure it makes sense
+ * to actually use it.
+ *
+ * New interrupt handler strategy:
+ *
+ * The old interrupt handler worked using the traditional method of
+ * replacing an skbuff with a new one when a packet arrives. However
+ * the rx rings do not need to contain a static number of buffer
+ * descriptors, thus it makes sense to move the memory allocation out
+ * of the main interrupt handler and do it in a bottom half handler
+ * and only allocate new buffers when the number of buffers in the
+ * ring is below a certain threshold. In order to avoid starving the
+ * NIC under heavy load it is however necessary to force allocation
+ * when hitting a minimum threshold. The strategy for allocation is as
+ * follows:
+ *
+ * RX_LOW_BUF_THRES - allocate buffers in the bottom half
+ * RX_PANIC_LOW_THRES - we are very low on buffers, allocate
+ * the buffers in the interrupt handler
+ * RX_RING_THRES - maximum number of buffers in the rx ring
+ * RX_MINI_THRES - maximum number of buffers in the mini ring
+ * RX_JUMBO_THRES - maximum number of buffers in the jumbo ring
+ *
+ * One advantageous side effect of this allocation approach is that the
+ * entire rx processing can be done without holding any spin lock
+ * since the rx rings and registers are totally independent of the tx
+ * ring and its registers. This of course includes the kmalloc's of
+ * new skb's. Thus start_xmit can run in parallel with rx processing
+ * and the memory allocation on SMP systems.
+ *
+ * Note that running the skb reallocation in a bottom half opens up
+ * another can of races which needs to be handled properly. In
+ * particular it can happen that the interrupt handler tries to run
+ * the reallocation while the bottom half is either running on another
+ * CPU or was interrupted on the same CPU. To get around this the
+ * driver uses bitops to prevent the reallocation routines from being
+ * reentered.
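+ *
+ * In code this boils down to the pattern used by ace_tasklet() and the
+ * rx ring loaders below, roughly:
+ *
+ * if (!test_and_set_bit(0, &ap->std_refill_busy))
+ * ace_load_std_rx_ring(dev, RX_RING_SIZE - cur_size);
+ *
+ * with the loader calling clear_bit(0, &ap->std_refill_busy) once the
+ * ring has been topped up.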
+ *
+ * TX handling can also be done without holding any spin lock (wheee,
+ * this is fun!) since tx_ret_csm is only written to by the interrupt
+ * handler. The case to be aware of is when shutting down the device
+ * and cleaning up where it is necessary to make sure that
+ * start_xmit() is not running while this is happening. Well DaveM
+ * informs me that this case is already protected against ... bye bye
+ * Mr. Spin Lock, it was nice to know you.
+ *
+ * TX interrupts are now partly disabled so the NIC will only generate
+ * TX interrupts for the number of coal ticks, not for the number of
+ * TX packets in the queue. This should reduce the number of TX-only
+ * interrupts seen, ie. interrupts taken when no RX processing is done.
+ */
+
+/*
+ * Threshold values for RX buffer allocation - the low water marks for
+ * when to start refilling the rings are set to 75% of the ring
+ * sizes. It seems to make sense to refill the rings entirely from the
+ * interrupt handler once it gets below the panic threshold, that way
+ * we don't risk that the refilling is moved to another CPU when the
+ * one running the interrupt handler just got the slab code hot in its
+ * cache.
+ */
+#define RX_RING_SIZE 72
+#define RX_MINI_SIZE 64
+#define RX_JUMBO_SIZE 48
+
+#define RX_PANIC_STD_THRES 16
+#define RX_PANIC_STD_REFILL (3*RX_PANIC_STD_THRES)/2
+#define RX_LOW_STD_THRES (3*RX_RING_SIZE)/4
+#define RX_PANIC_MINI_THRES 12
+#define RX_PANIC_MINI_REFILL (3*RX_PANIC_MINI_THRES)/2
+#define RX_LOW_MINI_THRES (3*RX_MINI_SIZE)/4
+#define RX_PANIC_JUMBO_THRES 6
+#define RX_PANIC_JUMBO_REFILL (3*RX_PANIC_JUMBO_THRES)/2
+#define RX_LOW_JUMBO_THRES (3*RX_JUMBO_SIZE)/4
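+/*
+ * With the ring sizes above these work out to low water marks of 54
+ * (std), 48 (mini) and 36 (jumbo) buffers, ie. 75% of 72, 64 and 48
+ * respectively.
+ */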
+
+
+/*
+ * Size of the mini ring entries, basically these just should be big
+ * enough to take TCP ACKs
+ */
+#define ACE_MINI_SIZE 100
+
+#define ACE_MINI_BUFSIZE ACE_MINI_SIZE
+#define ACE_STD_BUFSIZE (ACE_STD_MTU + ETH_HLEN + 4)
+#define ACE_JUMBO_BUFSIZE (ACE_JUMBO_MTU + ETH_HLEN + 4)
+
+/*
+ * There seems to be a magic difference in the effect between 995 and 996
+ * but little difference between 900 and 995 ... no idea why.
+ *
+ * There is now a default set of tuning parameters which is set, depending
+ * on whether or not the user enables Jumbo frames. It's assumed that if
+ * Jumbo frames are enabled, the user wants optimal tuning for that case.
+ */
+#define DEF_TX_COAL 400 /* 996 */
+#define DEF_TX_MAX_DESC 60 /* was 40 */
+#define DEF_RX_COAL 120 /* 1000 */
+#define DEF_RX_MAX_DESC 25
+#define DEF_TX_RATIO 21 /* 24 */
+
+#define DEF_JUMBO_TX_COAL 20
+#define DEF_JUMBO_TX_MAX_DESC 60
+#define DEF_JUMBO_RX_COAL 30
+#define DEF_JUMBO_RX_MAX_DESC 6
+#define DEF_JUMBO_TX_RATIO 21
+
+#if tigon2FwReleaseLocal < 20001118
+/*
+ * Standard firmware and early modifications duplicate
+ * IRQ load without this flag (coal timer is never reset).
+ * Note that with this flag tx_coal should be less than
+ * time to xmit full tx ring.
+ * 400usec is not so bad for tx ring size of 128.
+ */
+#define TX_COAL_INTS_ONLY 1 /* worth it */
+#else
+/*
+ * With modified firmware, this is not necessary, but still useful.
+ */
+#define TX_COAL_INTS_ONLY 1
+#endif
+
+#define DEF_TRACE 0
+#define DEF_STAT (2 * TICKS_PER_SEC)
+
+
+static int link_state[ACE_MAX_MOD_PARMS];
+static int trace[ACE_MAX_MOD_PARMS];
+static int tx_coal_tick[ACE_MAX_MOD_PARMS];
+static int rx_coal_tick[ACE_MAX_MOD_PARMS];
+static int max_tx_desc[ACE_MAX_MOD_PARMS];
+static int max_rx_desc[ACE_MAX_MOD_PARMS];
+static int tx_ratio[ACE_MAX_MOD_PARMS];
+static int dis_pci_mem_inval[ACE_MAX_MOD_PARMS] = {1, 1, 1, 1, 1, 1, 1, 1};
+
+MODULE_AUTHOR("Jes Sorensen <jes@trained-monkey.org>");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("AceNIC/3C985/GA620 Gigabit Ethernet driver");
+#ifndef CONFIG_ACENIC_OMIT_TIGON_I
+MODULE_FIRMWARE("acenic/tg1.bin");
+#endif
+MODULE_FIRMWARE("acenic/tg2.bin");
+
+module_param_array_named(link, link_state, int, NULL, 0);
+module_param_array(trace, int, NULL, 0);
+module_param_array(tx_coal_tick, int, NULL, 0);
+module_param_array(max_tx_desc, int, NULL, 0);
+module_param_array(rx_coal_tick, int, NULL, 0);
+module_param_array(max_rx_desc, int, NULL, 0);
+module_param_array(tx_ratio, int, NULL, 0);
+MODULE_PARM_DESC(link, "AceNIC/3C985/NetGear link state");
+MODULE_PARM_DESC(trace, "AceNIC/3C985/NetGear firmware trace level");
+MODULE_PARM_DESC(tx_coal_tick, "AceNIC/3C985/GA620 max clock ticks to wait from first tx descriptor arrives");
+MODULE_PARM_DESC(max_tx_desc, "AceNIC/3C985/GA620 max number of transmit descriptors to wait");
+MODULE_PARM_DESC(rx_coal_tick, "AceNIC/3C985/GA620 max clock ticks to wait from first rx descriptor arrives");
+MODULE_PARM_DESC(max_rx_desc, "AceNIC/3C985/GA620 max number of receive descriptors to wait");
+MODULE_PARM_DESC(tx_ratio, "AceNIC/3C985/GA620 ratio of NIC memory used for TX/RX descriptors (range 0-63)");
+
+
+static const char version[] __devinitconst =
+ "acenic.c: v0.92 08/05/2002 Jes Sorensen, linux-acenic@SunSITE.dk\n"
+ " http://home.cern.ch/~jes/gige/acenic.html\n";
+
+static int ace_get_settings(struct net_device *, struct ethtool_cmd *);
+static int ace_set_settings(struct net_device *, struct ethtool_cmd *);
+static void ace_get_drvinfo(struct net_device *, struct ethtool_drvinfo *);
+
+static const struct ethtool_ops ace_ethtool_ops = {
+ .get_settings = ace_get_settings,
+ .set_settings = ace_set_settings,
+ .get_drvinfo = ace_get_drvinfo,
+};
+
+static void ace_watchdog(struct net_device *dev);
+
+static const struct net_device_ops ace_netdev_ops = {
+ .ndo_open = ace_open,
+ .ndo_stop = ace_close,
+ .ndo_tx_timeout = ace_watchdog,
+ .ndo_get_stats = ace_get_stats,
+ .ndo_start_xmit = ace_start_xmit,
+ .ndo_set_multicast_list = ace_set_multicast_list,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = ace_set_mac_addr,
+ .ndo_change_mtu = ace_change_mtu,
+};
+
+static int __devinit acenic_probe_one(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ struct net_device *dev;
+ struct ace_private *ap;
+ static int boards_found;
+
+ dev = alloc_etherdev(sizeof(struct ace_private));
+ if (dev == NULL) {
+ printk(KERN_ERR "acenic: Unable to allocate "
+ "net_device structure!\n");
+ return -ENOMEM;
+ }
+
+ SET_NETDEV_DEV(dev, &pdev->dev);
+
+ ap = netdev_priv(dev);
+ ap->pdev = pdev;
+ ap->name = pci_name(pdev);
+
+ dev->features |= NETIF_F_SG | NETIF_F_IP_CSUM;
+ dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
+
+ dev->watchdog_timeo = 5*HZ;
+
+ dev->netdev_ops = &ace_netdev_ops;
+ SET_ETHTOOL_OPS(dev, &ace_ethtool_ops);
+
+ /* we only display this string ONCE */
+ if (!boards_found)
+ printk(version);
+
+ if (pci_enable_device(pdev))
+ goto fail_free_netdev;
+
+ /*
+ * Enable master mode before we start playing with the
+ * pci_command word since pci_set_master() will modify
+ * it.
+ */
+ pci_set_master(pdev);
+
+ pci_read_config_word(pdev, PCI_COMMAND, &ap->pci_command);
+
+ /* OpenFirmware on Macs does not set this - DOH.. */
+ if (!(ap->pci_command & PCI_COMMAND_MEMORY)) {
+ printk(KERN_INFO "%s: Enabling PCI Memory Mapped "
+ "access - was not enabled by BIOS/Firmware\n",
+ ap->name);
+ ap->pci_command = ap->pci_command | PCI_COMMAND_MEMORY;
+ pci_write_config_word(ap->pdev, PCI_COMMAND,
+ ap->pci_command);
+ wmb();
+ }
+
+ pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &ap->pci_latency);
+ if (ap->pci_latency <= 0x40) {
+ ap->pci_latency = 0x40;
+ pci_write_config_byte(pdev, PCI_LATENCY_TIMER, ap->pci_latency);
+ }
+
+ /*
+ * Remap the regs into kernel space - this is abuse of
+ * dev->base_addr since it was meant for I/O port
+ * addresses but who gives a damn.
+ */
+ dev->base_addr = pci_resource_start(pdev, 0);
+ ap->regs = ioremap(dev->base_addr, 0x4000);
+ if (!ap->regs) {
+ printk(KERN_ERR "%s: Unable to map I/O register, "
+ "AceNIC %i will be disabled.\n",
+ ap->name, boards_found);
+ goto fail_free_netdev;
+ }
+
+ switch(pdev->vendor) {
+ case PCI_VENDOR_ID_ALTEON:
+ if (pdev->device == PCI_DEVICE_ID_FARALLON_PN9100T) {
+ printk(KERN_INFO "%s: Farallon PN9100-T ",
+ ap->name);
+ } else {
+ printk(KERN_INFO "%s: Alteon AceNIC ",
+ ap->name);
+ }
+ break;
+ case PCI_VENDOR_ID_3COM:
+ printk(KERN_INFO "%s: 3Com 3C985 ", ap->name);
+ break;
+ case PCI_VENDOR_ID_NETGEAR:
+ printk(KERN_INFO "%s: NetGear GA620 ", ap->name);
+ break;
+ case PCI_VENDOR_ID_DEC:
+ if (pdev->device == PCI_DEVICE_ID_FARALLON_PN9000SX) {
+ printk(KERN_INFO "%s: Farallon PN9000-SX ",
+ ap->name);
+ break;
+ }
+ case PCI_VENDOR_ID_SGI:
+ printk(KERN_INFO "%s: SGI AceNIC ", ap->name);
+ break;
+ default:
+ printk(KERN_INFO "%s: Unknown AceNIC ", ap->name);
+ break;
+ }
+
+ printk("Gigabit Ethernet at 0x%08lx, ", dev->base_addr);
+ printk("irq %d\n", pdev->irq);
+
+#ifdef CONFIG_ACENIC_OMIT_TIGON_I
+ if ((readl(&ap->regs->HostCtrl) >> 28) == 4) {
+ printk(KERN_ERR "%s: Driver compiled without Tigon I"
+ " support - NIC disabled\n", dev->name);
+ goto fail_uninit;
+ }
+#endif
+
+ if (ace_allocate_descriptors(dev))
+ goto fail_free_netdev;
+
+#ifdef MODULE
+ if (boards_found >= ACE_MAX_MOD_PARMS)
+ ap->board_idx = BOARD_IDX_OVERFLOW;
+ else
+ ap->board_idx = boards_found;
+#else
+ ap->board_idx = BOARD_IDX_STATIC;
+#endif
+
+ if (ace_init(dev))
+ goto fail_free_netdev;
+
+ if (register_netdev(dev)) {
+ printk(KERN_ERR "acenic: device registration failed\n");
+ goto fail_uninit;
+ }
+ ap->name = dev->name;
+
+ if (ap->pci_using_dac)
+ dev->features |= NETIF_F_HIGHDMA;
+
+ pci_set_drvdata(pdev, dev);
+
+ boards_found++;
+ return 0;
+
+ fail_uninit:
+ ace_init_cleanup(dev);
+ fail_free_netdev:
+ free_netdev(dev);
+ return -ENODEV;
+}
+
+static void __devexit acenic_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ short i;
+
+ unregister_netdev(dev);
+
+ writel(readl(&regs->CpuCtrl) | CPU_HALT, &regs->CpuCtrl);
+ if (ap->version >= 2)
+ writel(readl(&regs->CpuBCtrl) | CPU_HALT, &regs->CpuBCtrl);
+
+ /*
+ * This clears any pending interrupts
+ */
+ writel(1, &regs->Mb0Lo);
+ readl(&regs->CpuCtrl); /* flush */
+
+ /*
+ * Make sure no other CPUs are processing interrupts
+ * on the card before the buffers are being released.
+ * Otherwise one might experience some `interesting'
+ * effects.
+ *
+ * Then release the RX buffers - jumbo buffers were
+ * already released in ace_close().
+ */
+ ace_sync_irq(dev->irq);
+
+ for (i = 0; i < RX_STD_RING_ENTRIES; i++) {
+ struct sk_buff *skb = ap->skb->rx_std_skbuff[i].skb;
+
+ if (skb) {
+ struct ring_info *ringp;
+ dma_addr_t mapping;
+
+ ringp = &ap->skb->rx_std_skbuff[i];
+ mapping = dma_unmap_addr(ringp, mapping);
+ pci_unmap_page(ap->pdev, mapping,
+ ACE_STD_BUFSIZE,
+ PCI_DMA_FROMDEVICE);
+
+ ap->rx_std_ring[i].size = 0;
+ ap->skb->rx_std_skbuff[i].skb = NULL;
+ dev_kfree_skb(skb);
+ }
+ }
+
+ if (ap->version >= 2) {
+ for (i = 0; i < RX_MINI_RING_ENTRIES; i++) {
+ struct sk_buff *skb = ap->skb->rx_mini_skbuff[i].skb;
+
+ if (skb) {
+ struct ring_info *ringp;
+ dma_addr_t mapping;
+
+ ringp = &ap->skb->rx_mini_skbuff[i];
+ mapping = dma_unmap_addr(ringp,mapping);
+ pci_unmap_page(ap->pdev, mapping,
+ ACE_MINI_BUFSIZE,
+ PCI_DMA_FROMDEVICE);
+
+ ap->rx_mini_ring[i].size = 0;
+ ap->skb->rx_mini_skbuff[i].skb = NULL;
+ dev_kfree_skb(skb);
+ }
+ }
+ }
+
+ for (i = 0; i < RX_JUMBO_RING_ENTRIES; i++) {
+ struct sk_buff *skb = ap->skb->rx_jumbo_skbuff[i].skb;
+ if (skb) {
+ struct ring_info *ringp;
+ dma_addr_t mapping;
+
+ ringp = &ap->skb->rx_jumbo_skbuff[i];
+ mapping = dma_unmap_addr(ringp, mapping);
+ pci_unmap_page(ap->pdev, mapping,
+ ACE_JUMBO_BUFSIZE,
+ PCI_DMA_FROMDEVICE);
+
+ ap->rx_jumbo_ring[i].size = 0;
+ ap->skb->rx_jumbo_skbuff[i].skb = NULL;
+ dev_kfree_skb(skb);
+ }
+ }
+
+ ace_init_cleanup(dev);
+ free_netdev(dev);
+}
+
+static struct pci_driver acenic_pci_driver = {
+ .name = "acenic",
+ .id_table = acenic_pci_tbl,
+ .probe = acenic_probe_one,
+ .remove = __devexit_p(acenic_remove_one),
+};
+
+static int __init acenic_init(void)
+{
+ return pci_register_driver(&acenic_pci_driver);
+}
+
+static void __exit acenic_exit(void)
+{
+ pci_unregister_driver(&acenic_pci_driver);
+}
+
+module_init(acenic_init);
+module_exit(acenic_exit);
+
+static void ace_free_descriptors(struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ int size;
+
+ if (ap->rx_std_ring != NULL) {
+ size = (sizeof(struct rx_desc) *
+ (RX_STD_RING_ENTRIES +
+ RX_JUMBO_RING_ENTRIES +
+ RX_MINI_RING_ENTRIES +
+ RX_RETURN_RING_ENTRIES));
+ pci_free_consistent(ap->pdev, size, ap->rx_std_ring,
+ ap->rx_ring_base_dma);
+ ap->rx_std_ring = NULL;
+ ap->rx_jumbo_ring = NULL;
+ ap->rx_mini_ring = NULL;
+ ap->rx_return_ring = NULL;
+ }
+ if (ap->evt_ring != NULL) {
+ size = (sizeof(struct event) * EVT_RING_ENTRIES);
+ pci_free_consistent(ap->pdev, size, ap->evt_ring,
+ ap->evt_ring_dma);
+ ap->evt_ring = NULL;
+ }
+ if (ap->tx_ring != NULL && !ACE_IS_TIGON_I(ap)) {
+ size = (sizeof(struct tx_desc) * MAX_TX_RING_ENTRIES);
+ pci_free_consistent(ap->pdev, size, ap->tx_ring,
+ ap->tx_ring_dma);
+ }
+ ap->tx_ring = NULL;
+
+ if (ap->evt_prd != NULL) {
+ pci_free_consistent(ap->pdev, sizeof(u32),
+ (void *)ap->evt_prd, ap->evt_prd_dma);
+ ap->evt_prd = NULL;
+ }
+ if (ap->rx_ret_prd != NULL) {
+ pci_free_consistent(ap->pdev, sizeof(u32),
+ (void *)ap->rx_ret_prd,
+ ap->rx_ret_prd_dma);
+ ap->rx_ret_prd = NULL;
+ }
+ if (ap->tx_csm != NULL) {
+ pci_free_consistent(ap->pdev, sizeof(u32),
+ (void *)ap->tx_csm, ap->tx_csm_dma);
+ ap->tx_csm = NULL;
+ }
+}
+
+
+static int ace_allocate_descriptors(struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ int size;
+
+ size = (sizeof(struct rx_desc) *
+ (RX_STD_RING_ENTRIES +
+ RX_JUMBO_RING_ENTRIES +
+ RX_MINI_RING_ENTRIES +
+ RX_RETURN_RING_ENTRIES));
+
+ ap->rx_std_ring = pci_alloc_consistent(ap->pdev, size,
+ &ap->rx_ring_base_dma);
+ if (ap->rx_std_ring == NULL)
+ goto fail;
+
+ ap->rx_jumbo_ring = ap->rx_std_ring + RX_STD_RING_ENTRIES;
+ ap->rx_mini_ring = ap->rx_jumbo_ring + RX_JUMBO_RING_ENTRIES;
+ ap->rx_return_ring = ap->rx_mini_ring + RX_MINI_RING_ENTRIES;
+
+ size = (sizeof(struct event) * EVT_RING_ENTRIES);
+
+ ap->evt_ring = pci_alloc_consistent(ap->pdev, size, &ap->evt_ring_dma);
+
+ if (ap->evt_ring == NULL)
+ goto fail;
+
+ /*
+ * Only allocate a host TX ring for the Tigon II, the Tigon I
+ * has to use PCI registers for this ;-(
+ */
+ if (!ACE_IS_TIGON_I(ap)) {
+ size = (sizeof(struct tx_desc) * MAX_TX_RING_ENTRIES);
+
+ ap->tx_ring = pci_alloc_consistent(ap->pdev, size,
+ &ap->tx_ring_dma);
+
+ if (ap->tx_ring == NULL)
+ goto fail;
+ }
+
+ ap->evt_prd = pci_alloc_consistent(ap->pdev, sizeof(u32),
+ &ap->evt_prd_dma);
+ if (ap->evt_prd == NULL)
+ goto fail;
+
+ ap->rx_ret_prd = pci_alloc_consistent(ap->pdev, sizeof(u32),
+ &ap->rx_ret_prd_dma);
+ if (ap->rx_ret_prd == NULL)
+ goto fail;
+
+ ap->tx_csm = pci_alloc_consistent(ap->pdev, sizeof(u32),
+ &ap->tx_csm_dma);
+ if (ap->tx_csm == NULL)
+ goto fail;
+
+ return 0;
+
+fail:
+ /* Clean up. */
+ ace_init_cleanup(dev);
+ return 1;
+}
+
+
+/*
+ * Generic cleanup handling data allocated during init. Used when the
+ * module is unloaded or if an error occurs during initialization
+ */
+static void ace_init_cleanup(struct net_device *dev)
+{
+ struct ace_private *ap;
+
+ ap = netdev_priv(dev);
+
+ ace_free_descriptors(dev);
+
+ if (ap->info)
+ pci_free_consistent(ap->pdev, sizeof(struct ace_info),
+ ap->info, ap->info_dma);
+ kfree(ap->skb);
+ kfree(ap->trace_buf);
+
+ if (dev->irq)
+ free_irq(dev->irq, dev);
+
+ iounmap(ap->regs);
+}
+
+
+/*
+ * Commands are considered to be slow.
+ */
+static inline void ace_issue_cmd(struct ace_regs __iomem *regs, struct cmd *cmd)
+{
+ u32 idx;
+
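+ /*
+ * Post the command as a single 32-bit word at the current producer
+ * index of the NIC's command ring, then advance the producer index
+ * (modulo the ring size) so the firmware picks it up.
+ */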
+ idx = readl(&regs->CmdPrd);
+
+ writel(*(u32 *)(cmd), &regs->CmdRng[idx]);
+ idx = (idx + 1) % CMD_RING_ENTRIES;
+
+ writel(idx, &regs->CmdPrd);
+}
+
+
+static int __devinit ace_init(struct net_device *dev)
+{
+ struct ace_private *ap;
+ struct ace_regs __iomem *regs;
+ struct ace_info *info = NULL;
+ struct pci_dev *pdev;
+ unsigned long myjif;
+ u64 tmp_ptr;
+ u32 tig_ver, mac1, mac2, tmp, pci_state;
+ int board_idx, ecode = 0;
+ short i;
+ unsigned char cache_size;
+
+ ap = netdev_priv(dev);
+ regs = ap->regs;
+
+ board_idx = ap->board_idx;
+
+ /*
+ * aman@sgi.com - it's useful to do a NIC reset here to
+ * address the `Firmware not running' problem subsequent
+ * to any crashes involving the NIC
+ */
+ writel(HW_RESET | (HW_RESET << 24), &regs->HostCtrl);
+ readl(&regs->HostCtrl); /* PCI write posting */
+ udelay(5);
+
+ /*
+ * Don't access any other registers before this point!
+ */
+#ifdef __BIG_ENDIAN
+ /*
+ * This will most likely need BYTE_SWAP once we switch
+ * to using __raw_writel()
+ */
+ writel((WORD_SWAP | CLR_INT | ((WORD_SWAP | CLR_INT) << 24)),
+ &regs->HostCtrl);
+#else
+ writel((CLR_INT | WORD_SWAP | ((CLR_INT | WORD_SWAP) << 24)),
+ &regs->HostCtrl);
+#endif
+ readl(&regs->HostCtrl); /* PCI write posting */
+
+ /*
+ * Stop the NIC CPU and clear pending interrupts
+ */
+ writel(readl(&regs->CpuCtrl) | CPU_HALT, &regs->CpuCtrl);
+ readl(&regs->CpuCtrl); /* PCI write posting */
+ writel(0, &regs->Mb0Lo);
+
+ tig_ver = readl(&regs->HostCtrl) >> 28;
+
+ switch(tig_ver){
+#ifndef CONFIG_ACENIC_OMIT_TIGON_I
+ case 4:
+ case 5:
+ printk(KERN_INFO " Tigon I (Rev. %i), Firmware: %i.%i.%i, ",
+ tig_ver, ap->firmware_major, ap->firmware_minor,
+ ap->firmware_fix);
+ writel(0, &regs->LocalCtrl);
+ ap->version = 1;
+ ap->tx_ring_entries = TIGON_I_TX_RING_ENTRIES;
+ break;
+#endif
+ case 6:
+ printk(KERN_INFO " Tigon II (Rev. %i), Firmware: %i.%i.%i, ",
+ tig_ver, ap->firmware_major, ap->firmware_minor,
+ ap->firmware_fix);
+ writel(readl(&regs->CpuBCtrl) | CPU_HALT, &regs->CpuBCtrl);
+ readl(&regs->CpuBCtrl); /* PCI write posting */
+ /*
+ * The SRAM bank size does _not_ indicate the amount
+ * of memory on the card, it controls the _bank_ size!
+ * Ie. a 1MB AceNIC will have two banks of 512KB.
+ */
+ writel(SRAM_BANK_512K, &regs->LocalCtrl);
+ writel(SYNC_SRAM_TIMING, &regs->MiscCfg);
+ ap->version = 2;
+ ap->tx_ring_entries = MAX_TX_RING_ENTRIES;
+ break;
+ default:
+ printk(KERN_WARNING " Unsupported Tigon version detected "
+ "(%i)\n", tig_ver);
+ ecode = -ENODEV;
+ goto init_error;
+ }
+
+ /*
+ * ModeStat _must_ be set after the SRAM settings as this change
+ * seems to corrupt the ModeStat and possibly other registers.
+ * The SRAM settings survive resets and setting it to the same
+ * value a second time works as well. This is what caused the
+ * `Firmware not running' problem on the Tigon II.
+ */
+#ifdef __BIG_ENDIAN
+ writel(ACE_BYTE_SWAP_DMA | ACE_WARN | ACE_FATAL | ACE_BYTE_SWAP_BD |
+ ACE_WORD_SWAP_BD | ACE_NO_JUMBO_FRAG, &regs->ModeStat);
+#else
+ writel(ACE_BYTE_SWAP_DMA | ACE_WARN | ACE_FATAL |
+ ACE_WORD_SWAP_BD | ACE_NO_JUMBO_FRAG, &regs->ModeStat);
+#endif
+ readl(&regs->ModeStat); /* PCI write posting */
+
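+ /*
+ * Read the station address from the EEPROM: eight bytes starting at
+ * offset 0x8c are packed into mac1/mac2, and the low six of them
+ * become dev_addr[] below.
+ */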
+ mac1 = 0;
+ for(i = 0; i < 4; i++) {
+ int t;
+
+ mac1 = mac1 << 8;
+ t = read_eeprom_byte(dev, 0x8c+i);
+ if (t < 0) {
+ ecode = -EIO;
+ goto init_error;
+ } else
+ mac1 |= (t & 0xff);
+ }
+ mac2 = 0;
+ for(i = 4; i < 8; i++) {
+ int t;
+
+ mac2 = mac2 << 8;
+ t = read_eeprom_byte(dev, 0x8c+i);
+ if (t < 0) {
+ ecode = -EIO;
+ goto init_error;
+ } else
+ mac2 |= (t & 0xff);
+ }
+
+ writel(mac1, &regs->MacAddrHi);
+ writel(mac2, &regs->MacAddrLo);
+
+ dev->dev_addr[0] = (mac1 >> 8) & 0xff;
+ dev->dev_addr[1] = mac1 & 0xff;
+ dev->dev_addr[2] = (mac2 >> 24) & 0xff;
+ dev->dev_addr[3] = (mac2 >> 16) & 0xff;
+ dev->dev_addr[4] = (mac2 >> 8) & 0xff;
+ dev->dev_addr[5] = mac2 & 0xff;
+
+ printk("MAC: %pM\n", dev->dev_addr);
+
+ /*
+ * Looks like this is necessary to deal with on all architectures,
+ * even this %$#%$# N440BX Intel based thing doesn't get it right.
+ * Ie. having two NICs in the machine, one will have the cache
+ * line set at boot time, the other will not.
+ */
+ pdev = ap->pdev;
+ pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &cache_size);
+ cache_size <<= 2;
+ if (cache_size != SMP_CACHE_BYTES) {
+ printk(KERN_INFO " PCI cache line size set incorrectly "
+ "(%i bytes) by BIOS/FW, ", cache_size);
+ if (cache_size > SMP_CACHE_BYTES)
+ printk("expecting %i\n", SMP_CACHE_BYTES);
+ else {
+ printk("correcting to %i\n", SMP_CACHE_BYTES);
+ pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE,
+ SMP_CACHE_BYTES >> 2);
+ }
+ }
+
+ pci_state = readl(&regs->PciState);
+ printk(KERN_INFO " PCI bus width: %i bits, speed: %iMHz, "
+ "latency: %i clks\n",
+ (pci_state & PCI_32BIT) ? 32 : 64,
+ (pci_state & PCI_66MHZ) ? 66 : 33,
+ ap->pci_latency);
+
+ /*
+ * Set the max DMA transfer size. Seems that for most systems
+ * the performance is better when no MAX parameter is
+ * set. However for systems enabling PCI write and invalidate,
+ * DMA writes must be set to the L1 cache line size to get
+ * optimal performance.
+ *
+ * The default is now to turn the PCI write and invalidate off
+ * - that is what Alteon does for NT.
+ */
+ tmp = READ_CMD_MEM | WRITE_CMD_MEM;
+ if (ap->version >= 2) {
+ tmp |= (MEM_READ_MULTIPLE | (pci_state & PCI_66MHZ));
+ /*
+ * Tuning parameters only supported for 8 cards
+ */
+ if (board_idx == BOARD_IDX_OVERFLOW ||
+ dis_pci_mem_inval[board_idx]) {
+ if (ap->pci_command & PCI_COMMAND_INVALIDATE) {
+ ap->pci_command &= ~PCI_COMMAND_INVALIDATE;
+ pci_write_config_word(pdev, PCI_COMMAND,
+ ap->pci_command);
+ printk(KERN_INFO " Disabling PCI memory "
+ "write and invalidate\n");
+ }
+ } else if (ap->pci_command & PCI_COMMAND_INVALIDATE) {
+ printk(KERN_INFO " PCI memory write & invalidate "
+ "enabled by BIOS, enabling counter measures\n");
+
+ switch(SMP_CACHE_BYTES) {
+ case 16:
+ tmp |= DMA_WRITE_MAX_16;
+ break;
+ case 32:
+ tmp |= DMA_WRITE_MAX_32;
+ break;
+ case 64:
+ tmp |= DMA_WRITE_MAX_64;
+ break;
+ case 128:
+ tmp |= DMA_WRITE_MAX_128;
+ break;
+ default:
+ printk(KERN_INFO " Cache line size %i not "
+ "supported, PCI write and invalidate "
+ "disabled\n", SMP_CACHE_BYTES);
+ ap->pci_command &= ~PCI_COMMAND_INVALIDATE;
+ pci_write_config_word(pdev, PCI_COMMAND,
+ ap->pci_command);
+ }
+ }
+ }
+
+#ifdef __sparc__
+ /*
+ * On this platform, we know what the best dma settings
+ * are. We use 64-byte maximum bursts, because if we
+ * burst larger than the cache line size (or even cross
+ * a 64byte boundary in a single burst) the UltraSparc
+ * PCI controller will disconnect at 64-byte multiples.
+ *
+ * Read-multiple will be properly enabled above, and when
+ * set will give the PCI controller proper hints about
+ * prefetching.
+ */
+ tmp &= ~DMA_READ_WRITE_MASK;
+ tmp |= DMA_READ_MAX_64;
+ tmp |= DMA_WRITE_MAX_64;
+#endif
+#ifdef __alpha__
+ tmp &= ~DMA_READ_WRITE_MASK;
+ tmp |= DMA_READ_MAX_128;
+ /*
+ * All the docs say MUST NOT. Well, I did.
+ * Nothing terrible happens if we load the wrong size.
+ * Bit w&i still works better!
+ */
+ tmp |= DMA_WRITE_MAX_128;
+#endif
+ writel(tmp, &regs->PciState);
+
+#if 0
+ /*
+ * The Host PCI bus controller driver has to set FBB.
+ * If all devices on that PCI bus support FBB, then the controller
+ * can enable FBB support in the Host PCI Bus controller (or on
+ * the PCI-PCI bridge if that applies).
+ * -ggg
+ */
+ /*
+ * I have received reports from people having problems when this
+ * bit is enabled.
+ */
+ if (!(ap->pci_command & PCI_COMMAND_FAST_BACK)) {
+ printk(KERN_INFO " Enabling PCI Fast Back to Back\n");
+ ap->pci_command |= PCI_COMMAND_FAST_BACK;
+ pci_write_config_word(pdev, PCI_COMMAND, ap->pci_command);
+ }
+#endif
+
+ /*
+ * Configure DMA attributes.
+ */
+ if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
+ ap->pci_using_dac = 1;
+ } else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
+ ap->pci_using_dac = 0;
+ } else {
+ ecode = -ENODEV;
+ goto init_error;
+ }
+
+ /*
+ * Initialize the generic info block and the command+event rings
+ * and the control blocks for the transmit and receive rings
+ * as they need to be setup once and for all.
+ */
+ if (!(info = pci_alloc_consistent(ap->pdev, sizeof(struct ace_info),
+ &ap->info_dma))) {
+ ecode = -EAGAIN;
+ goto init_error;
+ }
+ ap->info = info;
+
+ /*
+ * Get the memory for the skb rings.
+ */
+ if (!(ap->skb = kmalloc(sizeof(struct ace_skb), GFP_KERNEL))) {
+ ecode = -EAGAIN;
+ goto init_error;
+ }
+
+ ecode = request_irq(pdev->irq, ace_interrupt, IRQF_SHARED,
+ DRV_NAME, dev);
+ if (ecode) {
+ printk(KERN_WARNING "%s: Requested IRQ %d is busy\n",
+ DRV_NAME, pdev->irq);
+ goto init_error;
+ } else
+ dev->irq = pdev->irq;
+
+#ifdef INDEX_DEBUG
+ spin_lock_init(&ap->debug_lock);
+ ap->last_tx = ACE_TX_RING_ENTRIES(ap) - 1;
+ ap->last_std_rx = 0;
+ ap->last_mini_rx = 0;
+#endif
+
+ memset(ap->info, 0, sizeof(struct ace_info));
+ memset(ap->skb, 0, sizeof(struct ace_skb));
+
+ ecode = ace_load_firmware(dev);
+ if (ecode)
+ goto init_error;
+
+ ap->fw_running = 0;
+
+ tmp_ptr = ap->info_dma;
+ writel(tmp_ptr >> 32, &regs->InfoPtrHi);
+ writel(tmp_ptr & 0xffffffff, &regs->InfoPtrLo);
+
+ memset(ap->evt_ring, 0, EVT_RING_ENTRIES * sizeof(struct event));
+
+ set_aceaddr(&info->evt_ctrl.rngptr, ap->evt_ring_dma);
+ info->evt_ctrl.flags = 0;
+
+ *(ap->evt_prd) = 0;
+ wmb();
+ set_aceaddr(&info->evt_prd_ptr, ap->evt_prd_dma);
+ writel(0, &regs->EvtCsm);
+
+ set_aceaddr(&info->cmd_ctrl.rngptr, 0x100);
+ info->cmd_ctrl.flags = 0;
+ info->cmd_ctrl.max_len = 0;
+
+ for (i = 0; i < CMD_RING_ENTRIES; i++)
+ writel(0, &regs->CmdRng[i]);
+
+ writel(0, &regs->CmdPrd);
+ writel(0, &regs->CmdCsm);
+
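+ /*
+ * Hand the NIC the DMA address of the stats block inside the info
+ * struct; the null-pointer cast below is just a hand-rolled
+ * offsetof(struct ace_info, s.stats).
+ */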
+ tmp_ptr = ap->info_dma;
+ tmp_ptr += (unsigned long) &(((struct ace_info *)0)->s.stats);
+ set_aceaddr(&info->stats2_ptr, (dma_addr_t) tmp_ptr);
+
+ set_aceaddr(&info->rx_std_ctrl.rngptr, ap->rx_ring_base_dma);
+ info->rx_std_ctrl.max_len = ACE_STD_BUFSIZE;
+ info->rx_std_ctrl.flags =
+ RCB_FLG_TCP_UDP_SUM | RCB_FLG_NO_PSEUDO_HDR | RCB_FLG_VLAN_ASSIST;
+
+ memset(ap->rx_std_ring, 0,
+ RX_STD_RING_ENTRIES * sizeof(struct rx_desc));
+
+ for (i = 0; i < RX_STD_RING_ENTRIES; i++)
+ ap->rx_std_ring[i].flags = BD_FLG_TCP_UDP_SUM;
+
+ ap->rx_std_skbprd = 0;
+ atomic_set(&ap->cur_rx_bufs, 0);
+
+ set_aceaddr(&info->rx_jumbo_ctrl.rngptr,
+ (ap->rx_ring_base_dma +
+ (sizeof(struct rx_desc) * RX_STD_RING_ENTRIES)));
+ info->rx_jumbo_ctrl.max_len = 0;
+ info->rx_jumbo_ctrl.flags =
+ RCB_FLG_TCP_UDP_SUM | RCB_FLG_NO_PSEUDO_HDR | RCB_FLG_VLAN_ASSIST;
+
+ memset(ap->rx_jumbo_ring, 0,
+ RX_JUMBO_RING_ENTRIES * sizeof(struct rx_desc));
+
+ for (i = 0; i < RX_JUMBO_RING_ENTRIES; i++)
+ ap->rx_jumbo_ring[i].flags = BD_FLG_TCP_UDP_SUM | BD_FLG_JUMBO;
+
+ ap->rx_jumbo_skbprd = 0;
+ atomic_set(&ap->cur_jumbo_bufs, 0);
+
+ memset(ap->rx_mini_ring, 0,
+ RX_MINI_RING_ENTRIES * sizeof(struct rx_desc));
+
+ if (ap->version >= 2) {
+ set_aceaddr(&info->rx_mini_ctrl.rngptr,
+ (ap->rx_ring_base_dma +
+ (sizeof(struct rx_desc) *
+ (RX_STD_RING_ENTRIES +
+ RX_JUMBO_RING_ENTRIES))));
+ info->rx_mini_ctrl.max_len = ACE_MINI_SIZE;
+ info->rx_mini_ctrl.flags =
+ RCB_FLG_TCP_UDP_SUM|RCB_FLG_NO_PSEUDO_HDR|RCB_FLG_VLAN_ASSIST;
+
+ for (i = 0; i < RX_MINI_RING_ENTRIES; i++)
+ ap->rx_mini_ring[i].flags =
+ BD_FLG_TCP_UDP_SUM | BD_FLG_MINI;
+ } else {
+ set_aceaddr(&info->rx_mini_ctrl.rngptr, 0);
+ info->rx_mini_ctrl.flags = RCB_FLG_RNG_DISABLE;
+ info->rx_mini_ctrl.max_len = 0;
+ }
+
+ ap->rx_mini_skbprd = 0;
+ atomic_set(&ap->cur_mini_bufs, 0);
+
+ set_aceaddr(&info->rx_return_ctrl.rngptr,
+ (ap->rx_ring_base_dma +
+ (sizeof(struct rx_desc) *
+ (RX_STD_RING_ENTRIES +
+ RX_JUMBO_RING_ENTRIES +
+ RX_MINI_RING_ENTRIES))));
+ info->rx_return_ctrl.flags = 0;
+ info->rx_return_ctrl.max_len = RX_RETURN_RING_ENTRIES;
+
+ memset(ap->rx_return_ring, 0,
+ RX_RETURN_RING_ENTRIES * sizeof(struct rx_desc));
+
+ set_aceaddr(&info->rx_ret_prd_ptr, ap->rx_ret_prd_dma);
+ *(ap->rx_ret_prd) = 0;
+
+ writel(TX_RING_BASE, &regs->WinBase);
+
+ if (ACE_IS_TIGON_I(ap)) {
+ ap->tx_ring = (__force struct tx_desc *) regs->Window;
+ for (i = 0; i < (TIGON_I_TX_RING_ENTRIES
+ * sizeof(struct tx_desc)) / sizeof(u32); i++)
+ writel(0, (__force void __iomem *)ap->tx_ring + i * 4);
+
+ set_aceaddr(&info->tx_ctrl.rngptr, TX_RING_BASE);
+ } else {
+ memset(ap->tx_ring, 0,
+ MAX_TX_RING_ENTRIES * sizeof(struct tx_desc));
+
+ set_aceaddr(&info->tx_ctrl.rngptr, ap->tx_ring_dma);
+ }
+
+ info->tx_ctrl.max_len = ACE_TX_RING_ENTRIES(ap);
+ tmp = RCB_FLG_TCP_UDP_SUM | RCB_FLG_NO_PSEUDO_HDR | RCB_FLG_VLAN_ASSIST;
+
+ /*
+ * The Tigon I does not like having the TX ring in host memory ;-(
+ */
+ if (!ACE_IS_TIGON_I(ap))
+ tmp |= RCB_FLG_TX_HOST_RING;
+#if TX_COAL_INTS_ONLY
+ tmp |= RCB_FLG_COAL_INT_ONLY;
+#endif
+ info->tx_ctrl.flags = tmp;
+
+ set_aceaddr(&info->tx_csm_ptr, ap->tx_csm_dma);
+
+ /*
+ * Potential item for tuning parameter
+ */
+#if 0 /* NO */
+ writel(DMA_THRESH_16W, &regs->DmaReadCfg);
+ writel(DMA_THRESH_16W, &regs->DmaWriteCfg);
+#else
+ writel(DMA_THRESH_8W, &regs->DmaReadCfg);
+ writel(DMA_THRESH_8W, &regs->DmaWriteCfg);
+#endif
+
+ writel(0, &regs->MaskInt);
+ writel(1, &regs->IfIdx);
+#if 0
+ /*
+ * McKinley boxes do not like us fiddling with AssistState
+ * this early
+ */
+ writel(1, &regs->AssistState);
+#endif
+
+ writel(DEF_STAT, &regs->TuneStatTicks);
+ writel(DEF_TRACE, &regs->TuneTrace);
+
+ ace_set_rxtx_parms(dev, 0);
+
+ if (board_idx == BOARD_IDX_OVERFLOW) {
+ printk(KERN_WARNING "%s: more than %i NICs detected, "
+ "ignoring module parameters!\n",
+ ap->name, ACE_MAX_MOD_PARMS);
+ } else if (board_idx >= 0) {
+ if (tx_coal_tick[board_idx])
+ writel(tx_coal_tick[board_idx],
+ &regs->TuneTxCoalTicks);
+ if (max_tx_desc[board_idx])
+ writel(max_tx_desc[board_idx], &regs->TuneMaxTxDesc);
+
+ if (rx_coal_tick[board_idx])
+ writel(rx_coal_tick[board_idx],
+ &regs->TuneRxCoalTicks);
+ if (max_rx_desc[board_idx])
+ writel(max_rx_desc[board_idx], &regs->TuneMaxRxDesc);
+
+ if (trace[board_idx])
+ writel(trace[board_idx], &regs->TuneTrace);
+
+ if ((tx_ratio[board_idx] > 0) && (tx_ratio[board_idx] < 64))
+ writel(tx_ratio[board_idx], &regs->TxBufRat);
+ }
+
+ /*
+ * Default link parameters
+ */
+ tmp = LNK_ENABLE | LNK_FULL_DUPLEX | LNK_1000MB | LNK_100MB |
+ LNK_10MB | LNK_RX_FLOW_CTL_Y | LNK_NEG_FCTL | LNK_NEGOTIATE;
+ if(ap->version >= 2)
+ tmp |= LNK_TX_FLOW_CTL_Y;
+
+ /*
+ * Override link default parameters
+ */
+ if ((board_idx >= 0) && link_state[board_idx]) {
+ int option = link_state[board_idx];
+
+ tmp = LNK_ENABLE;
+
+ if (option & 0x01) {
+ printk(KERN_INFO "%s: Setting half duplex link\n",
+ ap->name);
+ tmp &= ~LNK_FULL_DUPLEX;
+ }
+ if (option & 0x02)
+ tmp &= ~LNK_NEGOTIATE;
+ if (option & 0x10)
+ tmp |= LNK_10MB;
+ if (option & 0x20)
+ tmp |= LNK_100MB;
+ if (option & 0x40)
+ tmp |= LNK_1000MB;
+ if ((option & 0x70) == 0) {
+ printk(KERN_WARNING "%s: No media speed specified, "
+ "forcing auto negotiation\n", ap->name);
+ tmp |= LNK_NEGOTIATE | LNK_1000MB |
+ LNK_100MB | LNK_10MB;
+ }
+ if ((option & 0x100) == 0)
+ tmp |= LNK_NEG_FCTL;
+ else
+ printk(KERN_INFO "%s: Disabling flow control "
+ "negotiation\n", ap->name);
+ if (option & 0x200)
+ tmp |= LNK_RX_FLOW_CTL_Y;
+ if ((option & 0x400) && (ap->version >= 2)) {
+ printk(KERN_INFO "%s: Enabling TX flow control\n",
+ ap->name);
+ tmp |= LNK_TX_FLOW_CTL_Y;
+ }
+ }
+
+ ap->link = tmp;
+ writel(tmp, &regs->TuneLink);
+ if (ap->version >= 2)
+ writel(tmp, &regs->TuneFastLink);
+
+ writel(ap->firmware_start, &regs->Pc);
+
+ writel(0, &regs->Mb0Lo);
+
+ /*
+ * Set tx_csm before we start receiving interrupts, otherwise
+ * the interrupt handler might think it is supposed to process
+ * tx ints before we are up and running, which may cause a null
+ * pointer access in the int handler.
+ */
+ ap->cur_rx = 0;
+ ap->tx_prd = *(ap->tx_csm) = ap->tx_ret_csm = 0;
+
+ wmb();
+ ace_set_txprd(regs, ap, 0);
+ writel(0, &regs->RxRetCsm);
+
+ /*
+ * Enable DMA engine now.
+ * If we do this sooner, the McKinley box pukes.
+ * I assume it's because Tigon II DMA engine wants to check
+ * *something* even before the CPU is started.
+ */
+ writel(1, &regs->AssistState); /* enable DMA */
+
+ /*
+ * Start the NIC CPU
+ */
+ writel(readl(&regs->CpuCtrl) & ~(CPU_HALT|CPU_TRACE), &regs->CpuCtrl);
+ readl(&regs->CpuCtrl);
+
+ /*
+ * Wait for the firmware to spin up - max 3 seconds.
+ */
+ myjif = jiffies + 3 * HZ;
+ while (time_before(jiffies, myjif) && !ap->fw_running)
+ cpu_relax();
+
+ if (!ap->fw_running) {
+ printk(KERN_ERR "%s: Firmware NOT running!\n", ap->name);
+
+ ace_dump_trace(ap);
+ writel(readl(&regs->CpuCtrl) | CPU_HALT, &regs->CpuCtrl);
+ readl(&regs->CpuCtrl);
+
+ /* aman@sgi.com - account for badly behaving firmware/NIC:
+ * - have observed that the NIC may continue to generate
+ * interrupts for some reason; attempt to stop it - halt
+ * second CPU for Tigon II cards, and also clear Mb0
+ * - if we're a module, we'll fail to load if this was
+ * the only GbE card in the system => if the kernel does
+ * see an interrupt from the NIC, code to handle it is
+ * gone and OOps! - so free_irq also
+ */
+ if (ap->version >= 2)
+ writel(readl(&regs->CpuBCtrl) | CPU_HALT,
+ &regs->CpuBCtrl);
+ writel(0, &regs->Mb0Lo);
+ readl(&regs->Mb0Lo);
+
+ ecode = -EBUSY;
+ goto init_error;
+ }
+
+ /*
+ * We load the ring here as there seems to be no way to tell the
+ * firmware to wipe the ring without re-initializing it.
+ */
+ if (!test_and_set_bit(0, &ap->std_refill_busy))
+ ace_load_std_rx_ring(dev, RX_RING_SIZE);
+ else
+ printk(KERN_ERR "%s: Someone is busy refilling the RX ring\n",
+ ap->name);
+ if (ap->version >= 2) {
+ if (!test_and_set_bit(0, &ap->mini_refill_busy))
+ ace_load_mini_rx_ring(dev, RX_MINI_SIZE);
+ else
+ printk(KERN_ERR "%s: Someone is busy refilling "
+ "the RX mini ring\n", ap->name);
+ }
+ return 0;
+
+ init_error:
+ ace_init_cleanup(dev);
+ return ecode;
+}
+
+
+static void ace_set_rxtx_parms(struct net_device *dev, int jumbo)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ int board_idx = ap->board_idx;
+
+ if (board_idx >= 0) {
+ if (!jumbo) {
+ if (!tx_coal_tick[board_idx])
+ writel(DEF_TX_COAL, &regs->TuneTxCoalTicks);
+ if (!max_tx_desc[board_idx])
+ writel(DEF_TX_MAX_DESC, &regs->TuneMaxTxDesc);
+ if (!rx_coal_tick[board_idx])
+ writel(DEF_RX_COAL, &regs->TuneRxCoalTicks);
+ if (!max_rx_desc[board_idx])
+ writel(DEF_RX_MAX_DESC, &regs->TuneMaxRxDesc);
+ if (!tx_ratio[board_idx])
+ writel(DEF_TX_RATIO, &regs->TxBufRat);
+ } else {
+ if (!tx_coal_tick[board_idx])
+ writel(DEF_JUMBO_TX_COAL,
+ &regs->TuneTxCoalTicks);
+ if (!max_tx_desc[board_idx])
+ writel(DEF_JUMBO_TX_MAX_DESC,
+ &regs->TuneMaxTxDesc);
+ if (!rx_coal_tick[board_idx])
+ writel(DEF_JUMBO_RX_COAL,
+ &regs->TuneRxCoalTicks);
+ if (!max_rx_desc[board_idx])
+ writel(DEF_JUMBO_RX_MAX_DESC,
+ &regs->TuneMaxRxDesc);
+ if (!tx_ratio[board_idx])
+ writel(DEF_JUMBO_TX_RATIO, &regs->TxBufRat);
+ }
+ }
+}
+
+
+static void ace_watchdog(struct net_device *data)
+{
+ struct net_device *dev = data;
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+
+ /*
+ * We haven't received a stats update event for more than 2.5
+ * seconds and there is data in the transmit queue, thus we
+ * assume the card is stuck.
+ */
+ if (*ap->tx_csm != ap->tx_ret_csm) {
+ printk(KERN_WARNING "%s: Transmitter is stuck, %08x\n",
+ dev->name, (unsigned int)readl(&regs->HostCtrl));
+ /* This can happen due to ieee flow control. */
+ } else {
+ printk(KERN_DEBUG "%s: BUG... transmitter died. Kicking it.\n",
+ dev->name);
+#if 0
+ netif_wake_queue(dev);
+#endif
+ }
+}
+
+
+static void ace_tasklet(unsigned long arg)
+{
+ struct net_device *dev = (struct net_device *) arg;
+ struct ace_private *ap = netdev_priv(dev);
+ int cur_size;
+
+ cur_size = atomic_read(&ap->cur_rx_bufs);
+ if ((cur_size < RX_LOW_STD_THRES) &&
+ !test_and_set_bit(0, &ap->std_refill_busy)) {
+#ifdef DEBUG
+ printk("refilling buffers (current %i)\n", cur_size);
+#endif
+ ace_load_std_rx_ring(dev, RX_RING_SIZE - cur_size);
+ }
+
+ if (ap->version >= 2) {
+ cur_size = atomic_read(&ap->cur_mini_bufs);
+ if ((cur_size < RX_LOW_MINI_THRES) &&
+ !test_and_set_bit(0, &ap->mini_refill_busy)) {
+#ifdef DEBUG
+ printk("refilling mini buffers (current %i)\n",
+ cur_size);
+#endif
+ ace_load_mini_rx_ring(dev, RX_MINI_SIZE - cur_size);
+ }
+ }
+
+ cur_size = atomic_read(&ap->cur_jumbo_bufs);
+ if (ap->jumbo && (cur_size < RX_LOW_JUMBO_THRES) &&
+ !test_and_set_bit(0, &ap->jumbo_refill_busy)) {
+#ifdef DEBUG
+ printk("refilling jumbo buffers (current %i)\n", cur_size);
+#endif
+ ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE - cur_size);
+ }
+ ap->tasklet_pending = 0;
+}
+
+
+/*
+ * Copy the contents of the NIC's trace buffer to kernel memory.
+ */
+static void ace_dump_trace(struct ace_private *ap)
+{
+#if 0
+ if (!ap->trace_buf)
+ if (!(ap->trace_buf = kmalloc(ACE_TRACE_SIZE, GFP_KERNEL)))
+ return;
+#endif
+}
+
+
+/*
+ * Load the standard rx ring.
+ *
+ * Loading rings is safe without holding the spin lock since this is
+ * done only before the device is enabled, thus no interrupts are
+ * generated and the rings are not touched by the interrupt handler
+ * or the tasklet handler.
+ */
+static void ace_load_std_rx_ring(struct net_device *dev, int nr_bufs)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ short i, idx;
+
+
+ prefetchw(&ap->cur_rx_bufs);
+
+ idx = ap->rx_std_skbprd;
+
+ for (i = 0; i < nr_bufs; i++) {
+ struct sk_buff *skb;
+ struct rx_desc *rd;
+ dma_addr_t mapping;
+
+ skb = netdev_alloc_skb_ip_align(dev, ACE_STD_BUFSIZE);
+ if (!skb)
+ break;
+
+ mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
+ offset_in_page(skb->data),
+ ACE_STD_BUFSIZE,
+ PCI_DMA_FROMDEVICE);
+ ap->skb->rx_std_skbuff[idx].skb = skb;
+ dma_unmap_addr_set(&ap->skb->rx_std_skbuff[idx],
+ mapping, mapping);
+
+ rd = &ap->rx_std_ring[idx];
+ set_aceaddr(&rd->addr, mapping);
+ rd->size = ACE_STD_BUFSIZE;
+ rd->idx = idx;
+ idx = (idx + 1) % RX_STD_RING_ENTRIES;
+ }
+
+ if (!i)
+ goto error_out;
+
+ atomic_add(i, &ap->cur_rx_bufs);
+ ap->rx_std_skbprd = idx;
+
+ if (ACE_IS_TIGON_I(ap)) {
+ struct cmd cmd;
+ cmd.evt = C_SET_RX_PRD_IDX;
+ cmd.code = 0;
+ cmd.idx = ap->rx_std_skbprd;
+ ace_issue_cmd(regs, &cmd);
+ } else {
+ writel(idx, &regs->RxStdPrd);
+ wmb();
+ }
+
+ out:
+ clear_bit(0, &ap->std_refill_busy);
+ return;
+
+ error_out:
+ printk(KERN_INFO "Out of memory when allocating "
+ "standard receive buffers\n");
+ goto out;
+}
+
+
+static void ace_load_mini_rx_ring(struct net_device *dev, int nr_bufs)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ short i, idx;
+
+ prefetchw(&ap->cur_mini_bufs);
+
+ idx = ap->rx_mini_skbprd;
+ for (i = 0; i < nr_bufs; i++) {
+ struct sk_buff *skb;
+ struct rx_desc *rd;
+ dma_addr_t mapping;
+
+ skb = netdev_alloc_skb_ip_align(dev, ACE_MINI_BUFSIZE);
+ if (!skb)
+ break;
+
+ mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
+ offset_in_page(skb->data),
+ ACE_MINI_BUFSIZE,
+ PCI_DMA_FROMDEVICE);
+ ap->skb->rx_mini_skbuff[idx].skb = skb;
+ dma_unmap_addr_set(&ap->skb->rx_mini_skbuff[idx],
+ mapping, mapping);
+
+ rd = &ap->rx_mini_ring[idx];
+ set_aceaddr(&rd->addr, mapping);
+ rd->size = ACE_MINI_BUFSIZE;
+ rd->idx = idx;
+ idx = (idx + 1) % RX_MINI_RING_ENTRIES;
+ }
+
+ if (!i)
+ goto error_out;
+
+ atomic_add(i, &ap->cur_mini_bufs);
+
+ ap->rx_mini_skbprd = idx;
+
+ writel(idx, &regs->RxMiniPrd);
+ wmb();
+
+ out:
+ clear_bit(0, &ap->mini_refill_busy);
+ return;
+ error_out:
+ printk(KERN_INFO "Out of memory when allocating "
+ "mini receive buffers\n");
+ goto out;
+}
+
+
+/*
+ * Load the jumbo rx ring; this may happen at any time if the MTU
+ * is changed to a value > 1500.
+ */
+static void ace_load_jumbo_rx_ring(struct net_device *dev, int nr_bufs)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ short i, idx;
+
+ idx = ap->rx_jumbo_skbprd;
+
+ for (i = 0; i < nr_bufs; i++) {
+ struct sk_buff *skb;
+ struct rx_desc *rd;
+ dma_addr_t mapping;
+
+ skb = netdev_alloc_skb_ip_align(dev, ACE_JUMBO_BUFSIZE);
+ if (!skb)
+ break;
+
+ mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
+ offset_in_page(skb->data),
+ ACE_JUMBO_BUFSIZE,
+ PCI_DMA_FROMDEVICE);
+ ap->skb->rx_jumbo_skbuff[idx].skb = skb;
+ dma_unmap_addr_set(&ap->skb->rx_jumbo_skbuff[idx],
+ mapping, mapping);
+
+ rd = &ap->rx_jumbo_ring[idx];
+ set_aceaddr(&rd->addr, mapping);
+ rd->size = ACE_JUMBO_BUFSIZE;
+ rd->idx = idx;
+ idx = (idx + 1) % RX_JUMBO_RING_ENTRIES;
+ }
+
+ if (!i)
+ goto error_out;
+
+ atomic_add(i, &ap->cur_jumbo_bufs);
+ ap->rx_jumbo_skbprd = idx;
+
+ if (ACE_IS_TIGON_I(ap)) {
+ struct cmd cmd;
+ cmd.evt = C_SET_RX_JUMBO_PRD_IDX;
+ cmd.code = 0;
+ cmd.idx = ap->rx_jumbo_skbprd;
+ ace_issue_cmd(regs, &cmd);
+ } else {
+ writel(idx, &regs->RxJumboPrd);
+ wmb();
+ }
+
+ out:
+ clear_bit(0, &ap->jumbo_refill_busy);
+ return;
+ error_out:
+ if (net_ratelimit())
+ printk(KERN_INFO "Out of memory when allocating "
+ "jumbo receive buffers\n");
+ goto out;
+}
+
+
+/*
+ * All events are considered to be slow (RX/TX ints do not generate
+ * events) and are handled here, outside the main interrupt handler,
+ * to reduce the size of the handler.
+ */
+static u32 ace_handle_event(struct net_device *dev, u32 evtcsm, u32 evtprd)
+{
+ struct ace_private *ap;
+
+ ap = netdev_priv(dev);
+
+ while (evtcsm != evtprd) {
+ switch (ap->evt_ring[evtcsm].evt) {
+ case E_FW_RUNNING:
+ printk(KERN_INFO "%s: Firmware up and running\n",
+ ap->name);
+ ap->fw_running = 1;
+ wmb();
+ break;
+ case E_STATS_UPDATED:
+ break;
+ case E_LNK_STATE:
+ {
+ u16 code = ap->evt_ring[evtcsm].code;
+ switch (code) {
+ case E_C_LINK_UP:
+ {
+ u32 state = readl(&ap->regs->GigLnkState);
+ printk(KERN_WARNING "%s: Optical link UP "
+ "(%s Duplex, Flow Control: %s%s)\n",
+ ap->name,
+ state & LNK_FULL_DUPLEX ? "Full":"Half",
+ state & LNK_TX_FLOW_CTL_Y ? "TX " : "",
+ state & LNK_RX_FLOW_CTL_Y ? "RX" : "");
+ break;
+ }
+ case E_C_LINK_DOWN:
+ printk(KERN_WARNING "%s: Optical link DOWN\n",
+ ap->name);
+ break;
+ case E_C_LINK_10_100:
+ printk(KERN_WARNING "%s: 10/100BaseT link "
+ "UP\n", ap->name);
+ break;
+ default:
+ printk(KERN_ERR "%s: Unknown optical link "
+ "state %02x\n", ap->name, code);
+ }
+ break;
+ }
+ case E_ERROR:
+ switch(ap->evt_ring[evtcsm].code) {
+ case E_C_ERR_INVAL_CMD:
+ printk(KERN_ERR "%s: invalid command error\n",
+ ap->name);
+ break;
+ case E_C_ERR_UNIMP_CMD:
+ printk(KERN_ERR "%s: unimplemented command "
+ "error\n", ap->name);
+ break;
+ case E_C_ERR_BAD_CFG:
+ printk(KERN_ERR "%s: bad config error\n",
+ ap->name);
+ break;
+ default:
+ printk(KERN_ERR "%s: unknown error %02x\n",
+ ap->name, ap->evt_ring[evtcsm].code);
+ }
+ break;
+ case E_RESET_JUMBO_RNG:
+ {
+ int i;
+ for (i = 0; i < RX_JUMBO_RING_ENTRIES; i++) {
+ if (ap->skb->rx_jumbo_skbuff[i].skb) {
+ ap->rx_jumbo_ring[i].size = 0;
+ set_aceaddr(&ap->rx_jumbo_ring[i].addr, 0);
+ dev_kfree_skb(ap->skb->rx_jumbo_skbuff[i].skb);
+ ap->skb->rx_jumbo_skbuff[i].skb = NULL;
+ }
+ }
+
+ if (ACE_IS_TIGON_I(ap)) {
+ struct cmd cmd;
+ cmd.evt = C_SET_RX_JUMBO_PRD_IDX;
+ cmd.code = 0;
+ cmd.idx = 0;
+ ace_issue_cmd(ap->regs, &cmd);
+ } else {
+ writel(0, &((ap->regs)->RxJumboPrd));
+ wmb();
+ }
+
+ ap->jumbo = 0;
+ ap->rx_jumbo_skbprd = 0;
+ printk(KERN_INFO "%s: Jumbo ring flushed\n",
+ ap->name);
+ clear_bit(0, &ap->jumbo_refill_busy);
+ break;
+ }
+ default:
+ printk(KERN_ERR "%s: Unhandled event 0x%02x\n",
+ ap->name, ap->evt_ring[evtcsm].evt);
+ }
+ evtcsm = (evtcsm + 1) % EVT_RING_ENTRIES;
+ }
+
+ return evtcsm;
+}
+
+
+static void ace_rx_int(struct net_device *dev, u32 rxretprd, u32 rxretcsm)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ u32 idx;
+ int mini_count = 0, std_count = 0;
+
+ idx = rxretcsm;
+
+ prefetchw(&ap->cur_rx_bufs);
+ prefetchw(&ap->cur_mini_bufs);
+
+ while (idx != rxretprd) {
+ struct ring_info *rip;
+ struct sk_buff *skb;
+ struct rx_desc *rxdesc, *retdesc;
+ u32 skbidx;
+ int bd_flags, desc_type, mapsize;
+ u16 csum;
+
+
+ /* make sure the rx descriptor isn't read before rxretprd */
+ if (idx == rxretcsm)
+ rmb();
+
+ retdesc = &ap->rx_return_ring[idx];
+ skbidx = retdesc->idx;
+ bd_flags = retdesc->flags;
+ desc_type = bd_flags & (BD_FLG_JUMBO | BD_FLG_MINI);
+
+ switch(desc_type) {
+ /*
+ * Normal frames do not have any flags set
+ *
+ * Mini and normal frames arrive frequently,
+ * so use a local counter to avoid doing
+ * atomic operations for each packet arriving.
+ */
+ case 0:
+ rip = &ap->skb->rx_std_skbuff[skbidx];
+ mapsize = ACE_STD_BUFSIZE;
+ rxdesc = &ap->rx_std_ring[skbidx];
+ std_count++;
+ break;
+ case BD_FLG_JUMBO:
+ rip = &ap->skb->rx_jumbo_skbuff[skbidx];
+ mapsize = ACE_JUMBO_BUFSIZE;
+ rxdesc = &ap->rx_jumbo_ring[skbidx];
+ atomic_dec(&ap->cur_jumbo_bufs);
+ break;
+ case BD_FLG_MINI:
+ rip = &ap->skb->rx_mini_skbuff[skbidx];
+ mapsize = ACE_MINI_BUFSIZE;
+ rxdesc = &ap->rx_mini_ring[skbidx];
+ mini_count++;
+ break;
+ default:
+ printk(KERN_INFO "%s: unknown frame type (0x%02x) "
+ "returned by NIC\n", dev->name,
+ retdesc->flags);
+ goto error;
+ }
+
+ skb = rip->skb;
+ rip->skb = NULL;
+ pci_unmap_page(ap->pdev,
+ dma_unmap_addr(rip, mapping),
+ mapsize,
+ PCI_DMA_FROMDEVICE);
+ skb_put(skb, retdesc->size);
+
+ /*
+ * Fly baby, fly!
+ */
+ csum = retdesc->tcp_udp_csum;
+
+ skb->protocol = eth_type_trans(skb, dev);
+
+ /*
+ * Instead of forcing the poor tigon mips cpu to calculate
+ * pseudo hdr checksum, we do this ourselves.
+ */
+ if (bd_flags & BD_FLG_TCP_UDP_SUM) {
+ skb->csum = htons(csum);
+ skb->ip_summed = CHECKSUM_COMPLETE;
+ } else {
+ skb_checksum_none_assert(skb);
+ }
+
+ /* send it up */
+ if ((bd_flags & BD_FLG_VLAN_TAG))
+ __vlan_hwaccel_put_tag(skb, retdesc->vlan);
+ netif_rx(skb);
+
+ dev->stats.rx_packets++;
+ dev->stats.rx_bytes += retdesc->size;
+
+ idx = (idx + 1) % RX_RETURN_RING_ENTRIES;
+ }
+
+ atomic_sub(std_count, &ap->cur_rx_bufs);
+ if (!ACE_IS_TIGON_I(ap))
+ atomic_sub(mini_count, &ap->cur_mini_bufs);
+
+ out:
+ /*
+ * According to the documentation RxRetCsm is obsolete with
+ * the 12.3.x Firmware - my Tigon I NICs seem to disagree!
+ */
+ if (ACE_IS_TIGON_I(ap)) {
+ writel(idx, &ap->regs->RxRetCsm);
+ }
+ ap->cur_rx = idx;
+
+ return;
+ error:
+ idx = rxretprd;
+ goto out;
+}
+
+
+static inline void ace_tx_int(struct net_device *dev,
+ u32 txcsm, u32 idx)
+{
+ struct ace_private *ap = netdev_priv(dev);
+
+ do {
+ struct sk_buff *skb;
+ struct tx_ring_info *info;
+
+ info = ap->skb->tx_skbuff + idx;
+ skb = info->skb;
+
+ if (dma_unmap_len(info, maplen)) {
+ pci_unmap_page(ap->pdev, dma_unmap_addr(info, mapping),
+ dma_unmap_len(info, maplen),
+ PCI_DMA_TODEVICE);
+ dma_unmap_len_set(info, maplen, 0);
+ }
+
+ if (skb) {
+ dev->stats.tx_packets++;
+ dev->stats.tx_bytes += skb->len;
+ dev_kfree_skb_irq(skb);
+ info->skb = NULL;
+ }
+
+ idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
+ } while (idx != txcsm);
+
+ if (netif_queue_stopped(dev))
+ netif_wake_queue(dev);
+
+ wmb();
+ ap->tx_ret_csm = txcsm;
+
+ /* So... tx_ret_csm is advanced _after_ check for device wakeup.
+ *
+	 * We could try to advance it earlier. In that case we would get
+	 * the following race condition: hard_start_xmit on another cpu
+	 * enters after we advanced tx_ret_csm and fills the space
+	 * which we have just freed, so we would perform an illegal device
+	 * wakeup. There is no good way to work around this (the entry
+	 * code of ace_start_xmit detects the condition and prevents
+	 * ring corruption, but that is not a good workaround.)
+	 *
+	 * When tx_ret_csm is advanced afterwards, we wake up the device
+	 * _only_ if we really have some space in the ring (though the
+	 * core doing hard_start_xmit can see a full ring for some period
+	 * and has to synchronize.) Superb.
+	 * BUT! We get another subtle race condition. hard_start_xmit
+	 * may think the ring is full between the wakeup and the advance
+	 * of tx_ret_csm and will stop the device instantly! That is not
+	 * so bad. We are guaranteed that there is something in the ring,
+	 * so the next irq will resume transmission. To speed this up we
+	 * could mark the descriptor which closes the ring with
+	 * BD_FLG_COAL_NOW (see ace_start_xmit).
+	 *
+	 * Well, this dilemma exists in all lock-free devices.
+	 * Following the scheme used in drivers by Donald Becker, we
+	 * select the least dangerous option.
+	 * --ANK
+ */
+}
+
+
+static irqreturn_t ace_interrupt(int irq, void *dev_id)
+{
+ struct net_device *dev = (struct net_device *)dev_id;
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ u32 idx;
+ u32 txcsm, rxretcsm, rxretprd;
+ u32 evtcsm, evtprd;
+
+ /*
+ * In case of PCI shared interrupts or spurious interrupts,
+ * we want to make sure it is actually our interrupt before
+ * spending any time in here.
+ */
+	if (!(readl(&regs->HostCtrl) & IN_INT))
+ return IRQ_NONE;
+
+ /*
+	 * ACK intr now. Otherwise we will lose updates to rx_ret_prd,
+	 * which happened _after_ rxretprd = *ap->rx_ret_prd; but before
+	 * writel(0, &regs->Mb0Lo).
+	 *
+	 * "IRQ avoidance" recommended in docs applies to IRQs served
+	 * by threads and it is wrong even for that case.
+	 */
+	writel(0, &regs->Mb0Lo);
+	readl(&regs->Mb0Lo);
+
+ /*
+ * There is no conflict between transmit handling in
+ * start_xmit and receive processing, thus there is no reason
+ * to take a spin lock for RX handling. Wait until we start
+ * working on the other stuff - hey we don't need a spin lock
+ * anymore.
+ */
+ rxretprd = *ap->rx_ret_prd;
+ rxretcsm = ap->cur_rx;
+
+ if (rxretprd != rxretcsm)
+ ace_rx_int(dev, rxretprd, rxretcsm);
+
+ txcsm = *ap->tx_csm;
+ idx = ap->tx_ret_csm;
+
+ if (txcsm != idx) {
+ /*
+ * If each skb takes only one descriptor this check degenerates
+ * to identity, because new space has just been opened.
+		 * But if skbs are fragmented we must check that this index
+		 * update releases enough space, otherwise we just wait
+		 * for the device to complete more work.
+ */
+ if (!tx_ring_full(ap, txcsm, ap->tx_prd))
+ ace_tx_int(dev, txcsm, idx);
+ }
+
+	evtcsm = readl(&regs->EvtCsm);
+	evtprd = *ap->evt_prd;
+
+	if (evtcsm != evtprd) {
+		evtcsm = ace_handle_event(dev, evtcsm, evtprd);
+		writel(evtcsm, &regs->EvtCsm);
+ }
+
+ /*
+ * This has to go last in the interrupt handler and run with
+ * the spin lock released ... what lock?
+ */
+ if (netif_running(dev)) {
+ int cur_size;
+ int run_tasklet = 0;
+
+ cur_size = atomic_read(&ap->cur_rx_bufs);
+ if (cur_size < RX_LOW_STD_THRES) {
+ if ((cur_size < RX_PANIC_STD_THRES) &&
+ !test_and_set_bit(0, &ap->std_refill_busy)) {
+#ifdef DEBUG
+ printk("low on std buffers %i\n", cur_size);
+#endif
+ ace_load_std_rx_ring(dev,
+ RX_RING_SIZE - cur_size);
+ } else
+ run_tasklet = 1;
+ }
+
+ if (!ACE_IS_TIGON_I(ap)) {
+ cur_size = atomic_read(&ap->cur_mini_bufs);
+ if (cur_size < RX_LOW_MINI_THRES) {
+ if ((cur_size < RX_PANIC_MINI_THRES) &&
+ !test_and_set_bit(0,
+ &ap->mini_refill_busy)) {
+#ifdef DEBUG
+ printk("low on mini buffers %i\n",
+ cur_size);
+#endif
+ ace_load_mini_rx_ring(dev,
+ RX_MINI_SIZE - cur_size);
+ } else
+ run_tasklet = 1;
+ }
+ }
+
+ if (ap->jumbo) {
+ cur_size = atomic_read(&ap->cur_jumbo_bufs);
+ if (cur_size < RX_LOW_JUMBO_THRES) {
+ if ((cur_size < RX_PANIC_JUMBO_THRES) &&
+ !test_and_set_bit(0,
+ &ap->jumbo_refill_busy)){
+#ifdef DEBUG
+ printk("low on jumbo buffers %i\n",
+ cur_size);
+#endif
+ ace_load_jumbo_rx_ring(dev,
+ RX_JUMBO_SIZE - cur_size);
+ } else
+ run_tasklet = 1;
+ }
+ }
+ if (run_tasklet && !ap->tasklet_pending) {
+ ap->tasklet_pending = 1;
+ tasklet_schedule(&ap->ace_tasklet);
+ }
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int ace_open(struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ struct cmd cmd;
+
+ if (!(ap->fw_running)) {
+ printk(KERN_WARNING "%s: Firmware not running!\n", dev->name);
+ return -EBUSY;
+ }
+
+	writel(dev->mtu + ETH_HLEN + 4, &regs->IfMtu);
+
+ cmd.evt = C_CLEAR_STATS;
+ cmd.code = 0;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+
+ cmd.evt = C_HOST_STATE;
+ cmd.code = C_C_STACK_UP;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+
+ if (ap->jumbo &&
+ !test_and_set_bit(0, &ap->jumbo_refill_busy))
+ ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE);
+
+ if (dev->flags & IFF_PROMISC) {
+ cmd.evt = C_SET_PROMISC_MODE;
+ cmd.code = C_C_PROMISC_ENABLE;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+
+ ap->promisc = 1;
+	} else
+ ap->promisc = 0;
+ ap->mcast_all = 0;
+
+#if 0
+ cmd.evt = C_LNK_NEGOTIATION;
+ cmd.code = 0;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+#endif
+
+ netif_start_queue(dev);
+
+ /*
+ * Setup the bottom half rx ring refill handler
+ */
+ tasklet_init(&ap->ace_tasklet, ace_tasklet, (unsigned long)dev);
+ return 0;
+}
+
+
+static int ace_close(struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ struct cmd cmd;
+ unsigned long flags;
+ short i;
+
+ /*
+	 * Without (or before) releasing the irq and stopping the hardware,
+	 * this is absolute nonsense, by the way; it will be reset instantly
+	 * by the first irq.
+ */
+ netif_stop_queue(dev);
+
+
+ if (ap->promisc) {
+ cmd.evt = C_SET_PROMISC_MODE;
+ cmd.code = C_C_PROMISC_DISABLE;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ ap->promisc = 0;
+ }
+
+ cmd.evt = C_HOST_STATE;
+ cmd.code = C_C_STACK_DOWN;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+
+ tasklet_kill(&ap->ace_tasklet);
+
+ /*
+ * Make sure one CPU is not processing packets while
+ * buffers are being released by another.
+ */
+
+ local_irq_save(flags);
+ ace_mask_irq(dev);
+
+ for (i = 0; i < ACE_TX_RING_ENTRIES(ap); i++) {
+ struct sk_buff *skb;
+ struct tx_ring_info *info;
+
+ info = ap->skb->tx_skbuff + i;
+ skb = info->skb;
+
+ if (dma_unmap_len(info, maplen)) {
+ if (ACE_IS_TIGON_I(ap)) {
+ /* NB: TIGON_1 is special, tx_ring is in io space */
+ struct tx_desc __iomem *tx;
+ tx = (__force struct tx_desc __iomem *) &ap->tx_ring[i];
+ writel(0, &tx->addr.addrhi);
+ writel(0, &tx->addr.addrlo);
+ writel(0, &tx->flagsize);
+ } else
+ memset(ap->tx_ring + i, 0,
+ sizeof(struct tx_desc));
+ pci_unmap_page(ap->pdev, dma_unmap_addr(info, mapping),
+ dma_unmap_len(info, maplen),
+ PCI_DMA_TODEVICE);
+ dma_unmap_len_set(info, maplen, 0);
+ }
+ if (skb) {
+ dev_kfree_skb(skb);
+ info->skb = NULL;
+ }
+ }
+
+ if (ap->jumbo) {
+ cmd.evt = C_RESET_JUMBO_RNG;
+ cmd.code = 0;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ }
+
+ ace_unmask_irq(dev);
+ local_irq_restore(flags);
+
+ return 0;
+}
+
+
+static inline dma_addr_t
+ace_map_tx_skb(struct ace_private *ap, struct sk_buff *skb,
+ struct sk_buff *tail, u32 idx)
+{
+ dma_addr_t mapping;
+ struct tx_ring_info *info;
+
+ mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
+ offset_in_page(skb->data),
+ skb->len, PCI_DMA_TODEVICE);
+
+ info = ap->skb->tx_skbuff + idx;
+ info->skb = tail;
+ dma_unmap_addr_set(info, mapping, mapping);
+ dma_unmap_len_set(info, maplen, skb->len);
+ return mapping;
+}
+
+
+static inline void
+ace_load_tx_bd(struct ace_private *ap, struct tx_desc *desc, u64 addr,
+ u32 flagsize, u32 vlan_tag)
+{
+#if !USE_TX_COAL_NOW
+ flagsize &= ~BD_FLG_COAL_NOW;
+#endif
+
+ if (ACE_IS_TIGON_I(ap)) {
+ struct tx_desc __iomem *io = (__force struct tx_desc __iomem *) desc;
+ writel(addr >> 32, &io->addr.addrhi);
+ writel(addr & 0xffffffff, &io->addr.addrlo);
+ writel(flagsize, &io->flagsize);
+ writel(vlan_tag, &io->vlanres);
+ } else {
+ desc->addr.addrhi = addr >> 32;
+ desc->addr.addrlo = addr;
+ desc->flagsize = flagsize;
+ desc->vlanres = vlan_tag;
+ }
+}
+
+
+static netdev_tx_t ace_start_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ struct tx_desc *desc;
+ u32 idx, flagsize;
+ unsigned long maxjiff = jiffies + 3*HZ;
+
+restart:
+ idx = ap->tx_prd;
+
+ if (tx_ring_full(ap, ap->tx_ret_csm, idx))
+ goto overflow;
+
+ if (!skb_shinfo(skb)->nr_frags) {
+ dma_addr_t mapping;
+ u32 vlan_tag = 0;
+
+ mapping = ace_map_tx_skb(ap, skb, skb, idx);
+ flagsize = (skb->len << 16) | (BD_FLG_END);
+ if (skb->ip_summed == CHECKSUM_PARTIAL)
+ flagsize |= BD_FLG_TCP_UDP_SUM;
+ if (vlan_tx_tag_present(skb)) {
+ flagsize |= BD_FLG_VLAN_TAG;
+ vlan_tag = vlan_tx_tag_get(skb);
+ }
+ desc = ap->tx_ring + idx;
+ idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
+
+ /* Look at ace_tx_int for explanations. */
+ if (tx_ring_full(ap, ap->tx_ret_csm, idx))
+ flagsize |= BD_FLG_COAL_NOW;
+
+ ace_load_tx_bd(ap, desc, mapping, flagsize, vlan_tag);
+ } else {
+ dma_addr_t mapping;
+ u32 vlan_tag = 0;
+ int i, len = 0;
+
+ mapping = ace_map_tx_skb(ap, skb, NULL, idx);
+ flagsize = (skb_headlen(skb) << 16);
+ if (skb->ip_summed == CHECKSUM_PARTIAL)
+ flagsize |= BD_FLG_TCP_UDP_SUM;
+ if (vlan_tx_tag_present(skb)) {
+ flagsize |= BD_FLG_VLAN_TAG;
+ vlan_tag = vlan_tx_tag_get(skb);
+ }
+
+ ace_load_tx_bd(ap, ap->tx_ring + idx, mapping, flagsize, vlan_tag);
+
+ idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
+
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ struct tx_ring_info *info;
+
+ len += frag->size;
+ info = ap->skb->tx_skbuff + idx;
+ desc = ap->tx_ring + idx;
+
+ mapping = pci_map_page(ap->pdev, frag->page,
+ frag->page_offset, frag->size,
+ PCI_DMA_TODEVICE);
+
+ flagsize = (frag->size << 16);
+ if (skb->ip_summed == CHECKSUM_PARTIAL)
+ flagsize |= BD_FLG_TCP_UDP_SUM;
+ idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
+
+ if (i == skb_shinfo(skb)->nr_frags - 1) {
+ flagsize |= BD_FLG_END;
+ if (tx_ring_full(ap, ap->tx_ret_csm, idx))
+ flagsize |= BD_FLG_COAL_NOW;
+
+ /*
+ * Only the last fragment frees
+ * the skb!
+ */
+ info->skb = skb;
+ } else {
+ info->skb = NULL;
+ }
+ dma_unmap_addr_set(info, mapping, mapping);
+ dma_unmap_len_set(info, maplen, frag->size);
+ ace_load_tx_bd(ap, desc, mapping, flagsize, vlan_tag);
+ }
+ }
+
+ wmb();
+ ap->tx_prd = idx;
+ ace_set_txprd(regs, ap, idx);
+
+ if (flagsize & BD_FLG_COAL_NOW) {
+ netif_stop_queue(dev);
+
+ /*
+ * A TX-descriptor producer (an IRQ) might have gotten
+ * between, making the ring free again. Since xmit is
+ * serialized, this is the only situation we have to
+ * re-test.
+ */
+ if (!tx_ring_full(ap, ap->tx_ret_csm, idx))
+ netif_wake_queue(dev);
+ }
+
+ return NETDEV_TX_OK;
+
+overflow:
+ /*
+ * This race condition is unavoidable with lock-free drivers.
+	 * The queue is woken up _before_ tx_prd is advanced, so we can
+	 * enter hard_start_xmit too early, while the tx ring still looks
+	 * closed. This happens ~1-4 times per 100000 packets, so it is
+	 * acceptable to loop here, syncing with the other CPU. We probably
+	 * need an additional wmb() in ace_tx_int as well.
+	 *
+	 * Note that this race is relieved by reserving one more entry
+	 * in the tx ring than is strictly necessary (see the original
+	 * non-SG driver). However, with SG we would need to reserve
+	 * 2*MAX_SKB_FRAGS+1, which is already overkill.
+	 *
+	 * The alternative is to return NETDEV_TX_BUSY without throttling
+	 * the queue. In that case the loop just becomes longer, with no
+	 * additional useful effect.
+ */
+ if (time_before(jiffies, maxjiff)) {
+ barrier();
+ cpu_relax();
+ goto restart;
+ }
+
+ /* The ring is stuck full. */
+ printk(KERN_WARNING "%s: Transmit ring stuck full\n", dev->name);
+ return NETDEV_TX_BUSY;
+}
+
+
+static int ace_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+
+ if (new_mtu > ACE_JUMBO_MTU)
+ return -EINVAL;
+
+	writel(new_mtu + ETH_HLEN + 4, &regs->IfMtu);
+ dev->mtu = new_mtu;
+
+ if (new_mtu > ACE_STD_MTU) {
+ if (!(ap->jumbo)) {
+ printk(KERN_INFO "%s: Enabling Jumbo frame "
+ "support\n", dev->name);
+ ap->jumbo = 1;
+ if (!test_and_set_bit(0, &ap->jumbo_refill_busy))
+ ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE);
+ ace_set_rxtx_parms(dev, 1);
+ }
+ } else {
+ while (test_and_set_bit(0, &ap->jumbo_refill_busy));
+ ace_sync_irq(dev->irq);
+ ace_set_rxtx_parms(dev, 0);
+ if (ap->jumbo) {
+ struct cmd cmd;
+
+ cmd.evt = C_RESET_JUMBO_RNG;
+ cmd.code = 0;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ }
+ }
+
+ return 0;
+}
+
+static int ace_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ u32 link;
+
+ memset(ecmd, 0, sizeof(struct ethtool_cmd));
+ ecmd->supported =
+ (SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full |
+ SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
+ SUPPORTED_1000baseT_Half | SUPPORTED_1000baseT_Full |
+ SUPPORTED_Autoneg | SUPPORTED_FIBRE);
+
+ ecmd->port = PORT_FIBRE;
+ ecmd->transceiver = XCVR_INTERNAL;
+
+	link = readl(&regs->GigLnkState);
+	if (link & LNK_1000MB)
+		ethtool_cmd_speed_set(ecmd, SPEED_1000);
+	else {
+		link = readl(&regs->FastLnkState);
+ if (link & LNK_100MB)
+ ethtool_cmd_speed_set(ecmd, SPEED_100);
+ else if (link & LNK_10MB)
+ ethtool_cmd_speed_set(ecmd, SPEED_10);
+ else
+ ethtool_cmd_speed_set(ecmd, 0);
+ }
+ if (link & LNK_FULL_DUPLEX)
+ ecmd->duplex = DUPLEX_FULL;
+ else
+ ecmd->duplex = DUPLEX_HALF;
+
+ if (link & LNK_NEGOTIATE)
+ ecmd->autoneg = AUTONEG_ENABLE;
+ else
+ ecmd->autoneg = AUTONEG_DISABLE;
+
+#if 0
+ /*
+ * Current struct ethtool_cmd is insufficient
+ */
+	ecmd->trace = readl(&regs->TuneTrace);
+
+	ecmd->txcoal = readl(&regs->TuneTxCoalTicks);
+	ecmd->rxcoal = readl(&regs->TuneRxCoalTicks);
+#endif
+	ecmd->maxtxpkt = readl(&regs->TuneMaxTxDesc);
+	ecmd->maxrxpkt = readl(&regs->TuneMaxRxDesc);
+
+ return 0;
+}
+
+static int ace_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ u32 link, speed;
+
+	link = readl(&regs->GigLnkState);
+	if (link & LNK_1000MB)
+		speed = SPEED_1000;
+	else {
+		link = readl(&regs->FastLnkState);
+ if (link & LNK_100MB)
+ speed = SPEED_100;
+ else if (link & LNK_10MB)
+ speed = SPEED_10;
+ else
+ speed = SPEED_100;
+ }
+
+ link = LNK_ENABLE | LNK_1000MB | LNK_100MB | LNK_10MB |
+ LNK_RX_FLOW_CTL_Y | LNK_NEG_FCTL;
+ if (!ACE_IS_TIGON_I(ap))
+ link |= LNK_TX_FLOW_CTL_Y;
+ if (ecmd->autoneg == AUTONEG_ENABLE)
+ link |= LNK_NEGOTIATE;
+ if (ethtool_cmd_speed(ecmd) != speed) {
+ link &= ~(LNK_1000MB | LNK_100MB | LNK_10MB);
+ switch (ethtool_cmd_speed(ecmd)) {
+ case SPEED_1000:
+ link |= LNK_1000MB;
+ break;
+ case SPEED_100:
+ link |= LNK_100MB;
+ break;
+ case SPEED_10:
+ link |= LNK_10MB;
+ break;
+ }
+ }
+
+ if (ecmd->duplex == DUPLEX_FULL)
+ link |= LNK_FULL_DUPLEX;
+
+ if (link != ap->link) {
+ struct cmd cmd;
+ printk(KERN_INFO "%s: Renegotiating link state\n",
+ dev->name);
+
+ ap->link = link;
+		writel(link, &regs->TuneLink);
+		if (!ACE_IS_TIGON_I(ap))
+			writel(link, &regs->TuneFastLink);
+ wmb();
+
+ cmd.evt = C_LNK_NEGOTIATION;
+ cmd.code = 0;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ }
+ return 0;
+}
+
+static void ace_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct ace_private *ap = netdev_priv(dev);
+
+ strlcpy(info->driver, "acenic", sizeof(info->driver));
+ snprintf(info->version, sizeof(info->version), "%i.%i.%i",
+ ap->firmware_major, ap->firmware_minor,
+ ap->firmware_fix);
+
+ if (ap->pdev)
+ strlcpy(info->bus_info, pci_name(ap->pdev),
+ sizeof(info->bus_info));
+
+}
+
+/*
+ * Set the hardware MAC address.
+ */
+static int ace_set_mac_addr(struct net_device *dev, void *p)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+	struct sockaddr *addr = p;
+	u8 *da;
+	struct cmd cmd;
+
+	if (netif_running(dev))
+		return -EBUSY;
+
+	memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+
+ da = (u8 *)dev->dev_addr;
+
+	writel(da[0] << 8 | da[1], &regs->MacAddrHi);
+	writel((da[2] << 24) | (da[3] << 16) | (da[4] << 8) | da[5],
+	       &regs->MacAddrLo);
+
+ cmd.evt = C_SET_MAC_ADDR;
+ cmd.code = 0;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+
+ return 0;
+}
+
+
+static void ace_set_multicast_list(struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ struct cmd cmd;
+
+ if ((dev->flags & IFF_ALLMULTI) && !(ap->mcast_all)) {
+ cmd.evt = C_SET_MULTICAST_MODE;
+ cmd.code = C_C_MCAST_ENABLE;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ ap->mcast_all = 1;
+ } else if (ap->mcast_all) {
+ cmd.evt = C_SET_MULTICAST_MODE;
+ cmd.code = C_C_MCAST_DISABLE;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ ap->mcast_all = 0;
+ }
+
+ if ((dev->flags & IFF_PROMISC) && !(ap->promisc)) {
+ cmd.evt = C_SET_PROMISC_MODE;
+ cmd.code = C_C_PROMISC_ENABLE;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ ap->promisc = 1;
+	} else if (!(dev->flags & IFF_PROMISC) && (ap->promisc)) {
+ cmd.evt = C_SET_PROMISC_MODE;
+ cmd.code = C_C_PROMISC_DISABLE;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ ap->promisc = 0;
+ }
+
+ /*
+ * For the time being multicast relies on the upper layers
+ * filtering it properly. The Firmware does not allow one to
+ * set the entire multicast list at a time and keeping track of
+ * it here is going to be messy.
+ */
+ if (!netdev_mc_empty(dev) && !ap->mcast_all) {
+ cmd.evt = C_SET_MULTICAST_MODE;
+ cmd.code = C_C_MCAST_ENABLE;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+	} else if (!ap->mcast_all) {
+ cmd.evt = C_SET_MULTICAST_MODE;
+ cmd.code = C_C_MCAST_DISABLE;
+ cmd.idx = 0;
+ ace_issue_cmd(regs, &cmd);
+ }
+}
+
+
+static struct net_device_stats *ace_get_stats(struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_mac_stats __iomem *mac_stats =
+ (struct ace_mac_stats __iomem *)ap->regs->Stats;
+
+ dev->stats.rx_missed_errors = readl(&mac_stats->drop_space);
+ dev->stats.multicast = readl(&mac_stats->kept_mc);
+ dev->stats.collisions = readl(&mac_stats->coll);
+
+ return &dev->stats;
+}
+
+
+static void __devinit ace_copy(struct ace_regs __iomem *regs, const __be32 *src,
+ u32 dest, int size)
+{
+ void __iomem *tdest;
+ short tsize, i;
+
+ if (size <= 0)
+ return;
+
+ while (size > 0) {
+ tsize = min_t(u32, ((~dest & (ACE_WINDOW_SIZE - 1)) + 1),
+ min_t(u32, size, ACE_WINDOW_SIZE));
+		tdest = (void __iomem *) &regs->Window +
+			(dest & (ACE_WINDOW_SIZE - 1));
+		writel(dest & ~(ACE_WINDOW_SIZE - 1), &regs->WinBase);
+ for (i = 0; i < (tsize / 4); i++) {
+ /* Firmware is big-endian */
+ writel(be32_to_cpup(src), tdest);
+ src++;
+ tdest += 4;
+ dest += 4;
+ size -= 4;
+ }
+ }
+}
+
+
+static void __devinit ace_clear(struct ace_regs __iomem *regs, u32 dest, int size)
+{
+ void __iomem *tdest;
+ short tsize = 0, i;
+
+ if (size <= 0)
+ return;
+
+ while (size > 0) {
+ tsize = min_t(u32, ((~dest & (ACE_WINDOW_SIZE - 1)) + 1),
+ min_t(u32, size, ACE_WINDOW_SIZE));
+		tdest = (void __iomem *) &regs->Window +
+			(dest & (ACE_WINDOW_SIZE - 1));
+		writel(dest & ~(ACE_WINDOW_SIZE - 1), &regs->WinBase);
+
+ for (i = 0; i < (tsize / 4); i++) {
+ writel(0, tdest + i*4);
+ }
+
+ dest += tsize;
+ size -= tsize;
+ }
+}
+
+
+/*
+ * Download the firmware into the SRAM on the NIC
+ *
+ * This operation requires the NIC to be halted and is performed with
+ * interrupts disabled and with the spinlock held.
+ */
+static int __devinit ace_load_firmware(struct net_device *dev)
+{
+ const struct firmware *fw;
+ const char *fw_name = "acenic/tg2.bin";
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ const __be32 *fw_data;
+ u32 load_addr;
+ int ret;
+
+	if (!(readl(&regs->CpuCtrl) & CPU_HALTED)) {
+ printk(KERN_ERR "%s: trying to download firmware while the "
+ "CPU is running!\n", ap->name);
+ return -EFAULT;
+ }
+
+ if (ACE_IS_TIGON_I(ap))
+ fw_name = "acenic/tg1.bin";
+
+ ret = request_firmware(&fw, fw_name, &ap->pdev->dev);
+ if (ret) {
+ printk(KERN_ERR "%s: Failed to load firmware \"%s\"\n",
+ ap->name, fw_name);
+ return ret;
+ }
+
+ fw_data = (void *)fw->data;
+
+ /* Firmware blob starts with version numbers, followed by
+ load and start address. Remainder is the blob to be loaded
+ contiguously from load address. We don't bother to represent
+ the BSS/SBSS sections any more, since we were clearing the
+ whole thing anyway. */
+ ap->firmware_major = fw->data[0];
+ ap->firmware_minor = fw->data[1];
+ ap->firmware_fix = fw->data[2];
+
+ ap->firmware_start = be32_to_cpu(fw_data[1]);
+ if (ap->firmware_start < 0x4000 || ap->firmware_start >= 0x80000) {
+ printk(KERN_ERR "%s: bogus load address %08x in \"%s\"\n",
+ ap->name, ap->firmware_start, fw_name);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ load_addr = be32_to_cpu(fw_data[2]);
+ if (load_addr < 0x4000 || load_addr >= 0x80000) {
+ printk(KERN_ERR "%s: bogus load address %08x in \"%s\"\n",
+ ap->name, load_addr, fw_name);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /*
+ * Do not try to clear more than 512KiB or we end up seeing
+ * funny things on NICs with only 512KiB SRAM
+ */
+ ace_clear(regs, 0x2000, 0x80000-0x2000);
+ ace_copy(regs, &fw_data[3], load_addr, fw->size-12);
+ out:
+ release_firmware(fw);
+ return ret;
+}
+
+
+/*
+ * The eeprom on the AceNIC is an Atmel i2c EEPROM.
+ *
+ * Accessing the EEPROM is `interesting' to say the least - don't read
+ * this code right after dinner.
+ *
+ * This is all about black magic and bit-banging the device .... I
+ * wonder in what hospital they have put the guy who designed the i2c
+ * specs.
+ *
+ * Oh yes, this is only the beginning!
+ *
+ * Thanks to Stevarino Webinski for helping track down the bugs in the
+ * i2c readout code by beta testing all my hacks.
+ */
+static void __devinit eeprom_start(struct ace_regs __iomem *regs)
+{
+ u32 local;
+
+	readl(&regs->LocalCtrl);
+	udelay(ACE_SHORT_DELAY);
+	local = readl(&regs->LocalCtrl);
+	local |= EEPROM_DATA_OUT | EEPROM_WRITE_ENABLE;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_SHORT_DELAY);
+	local |= EEPROM_CLK_OUT;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_SHORT_DELAY);
+	local &= ~EEPROM_DATA_OUT;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_SHORT_DELAY);
+	local &= ~EEPROM_CLK_OUT;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+ mb();
+}
+
+
+static void __devinit eeprom_prep(struct ace_regs __iomem *regs, u8 magic)
+{
+ short i;
+ u32 local;
+
+ udelay(ACE_SHORT_DELAY);
+	local = readl(&regs->LocalCtrl);
+	local &= ~EEPROM_DATA_OUT;
+	local |= EEPROM_WRITE_ENABLE;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+ mb();
+
+ for (i = 0; i < 8; i++, magic <<= 1) {
+ udelay(ACE_SHORT_DELAY);
+ if (magic & 0x80)
+ local |= EEPROM_DATA_OUT;
+ else
+ local &= ~EEPROM_DATA_OUT;
+		writel(local, &regs->LocalCtrl);
+		readl(&regs->LocalCtrl);
+		mb();
+
+		udelay(ACE_SHORT_DELAY);
+		local |= EEPROM_CLK_OUT;
+		writel(local, &regs->LocalCtrl);
+		readl(&regs->LocalCtrl);
+		mb();
+		udelay(ACE_SHORT_DELAY);
+		local &= ~(EEPROM_CLK_OUT | EEPROM_DATA_OUT);
+		writel(local, &regs->LocalCtrl);
+		readl(&regs->LocalCtrl);
+ mb();
+ }
+}
+
+
+static int __devinit eeprom_check_ack(struct ace_regs __iomem *regs)
+{
+ int state;
+ u32 local;
+
+	local = readl(&regs->LocalCtrl);
+	local &= ~EEPROM_WRITE_ENABLE;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_LONG_DELAY);
+	local |= EEPROM_CLK_OUT;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_SHORT_DELAY);
+	/* sample data in middle of high clk */
+	state = (readl(&regs->LocalCtrl) & EEPROM_DATA_IN) != 0;
+	udelay(ACE_SHORT_DELAY);
+	mb();
+	writel(readl(&regs->LocalCtrl) & ~EEPROM_CLK_OUT, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+ mb();
+
+ return state;
+}
+
+
+static void __devinit eeprom_stop(struct ace_regs __iomem *regs)
+{
+ u32 local;
+
+ udelay(ACE_SHORT_DELAY);
+	local = readl(&regs->LocalCtrl);
+	local |= EEPROM_WRITE_ENABLE;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_SHORT_DELAY);
+	local &= ~EEPROM_DATA_OUT;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_SHORT_DELAY);
+	local |= EEPROM_CLK_OUT;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_SHORT_DELAY);
+	local |= EEPROM_DATA_OUT;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_LONG_DELAY);
+	local &= ~EEPROM_CLK_OUT;
+	writel(local, &regs->LocalCtrl);
+ mb();
+}
+
+
+/*
+ * Read a whole byte from the EEPROM.
+ */
+static int __devinit read_eeprom_byte(struct net_device *dev,
+ unsigned long offset)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+ unsigned long flags;
+ u32 local;
+ int result = 0;
+ short i;
+
+ /*
+	 * Don't take interrupts on this CPU while bit banging
+ * the %#%#@$ I2C device
+ */
+ local_irq_save(flags);
+
+ eeprom_start(regs);
+
+ eeprom_prep(regs, EEPROM_WRITE_SELECT);
+ if (eeprom_check_ack(regs)) {
+ local_irq_restore(flags);
+ printk(KERN_ERR "%s: Unable to sync eeprom\n", ap->name);
+ result = -EIO;
+ goto eeprom_read_error;
+ }
+
+ eeprom_prep(regs, (offset >> 8) & 0xff);
+ if (eeprom_check_ack(regs)) {
+ local_irq_restore(flags);
+ printk(KERN_ERR "%s: Unable to set address byte 0\n",
+ ap->name);
+ result = -EIO;
+ goto eeprom_read_error;
+ }
+
+ eeprom_prep(regs, offset & 0xff);
+ if (eeprom_check_ack(regs)) {
+ local_irq_restore(flags);
+ printk(KERN_ERR "%s: Unable to set address byte 1\n",
+ ap->name);
+ result = -EIO;
+ goto eeprom_read_error;
+ }
+
+ eeprom_start(regs);
+ eeprom_prep(regs, EEPROM_READ_SELECT);
+ if (eeprom_check_ack(regs)) {
+ local_irq_restore(flags);
+ printk(KERN_ERR "%s: Unable to set READ_SELECT\n",
+ ap->name);
+ result = -EIO;
+ goto eeprom_read_error;
+ }
+
+ for (i = 0; i < 8; i++) {
+		local = readl(&regs->LocalCtrl);
+		local &= ~EEPROM_WRITE_ENABLE;
+		writel(local, &regs->LocalCtrl);
+		readl(&regs->LocalCtrl);
+		udelay(ACE_LONG_DELAY);
+		mb();
+		local |= EEPROM_CLK_OUT;
+		writel(local, &regs->LocalCtrl);
+		readl(&regs->LocalCtrl);
+		mb();
+		udelay(ACE_SHORT_DELAY);
+		/* sample data mid high clk */
+		result = (result << 1) |
+			((readl(&regs->LocalCtrl) & EEPROM_DATA_IN) != 0);
+		udelay(ACE_SHORT_DELAY);
+		mb();
+		local = readl(&regs->LocalCtrl);
+		local &= ~EEPROM_CLK_OUT;
+		writel(local, &regs->LocalCtrl);
+		readl(&regs->LocalCtrl);
+		udelay(ACE_SHORT_DELAY);
+		mb();
+		if (i == 7) {
+			local |= EEPROM_WRITE_ENABLE;
+			writel(local, &regs->LocalCtrl);
+			readl(&regs->LocalCtrl);
+			mb();
+			udelay(ACE_SHORT_DELAY);
+		}
+	}
+
+	local |= EEPROM_DATA_OUT;
+	writel(local, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	mb();
+	udelay(ACE_SHORT_DELAY);
+	writel(readl(&regs->LocalCtrl) | EEPROM_CLK_OUT, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+	udelay(ACE_LONG_DELAY);
+	writel(readl(&regs->LocalCtrl) & ~EEPROM_CLK_OUT, &regs->LocalCtrl);
+	readl(&regs->LocalCtrl);
+ mb();
+ udelay(ACE_SHORT_DELAY);
+ eeprom_stop(regs);
+
+ local_irq_restore(flags);
+ out:
+ return result;
+
+ eeprom_read_error:
+ printk(KERN_ERR "%s: Unable to read eeprom byte 0x%02lx\n",
+ ap->name, offset);
+ goto out;
+}
--- /dev/null
+#ifndef _ACENIC_H_
+#define _ACENIC_H_
+#include <linux/interrupt.h>
+
+
+/*
+ * Generate TX index update each time, when TX ring is closed.
+ * Normally, this is not useful, because results in more dma (and irqs
+ * without TX_COAL_INTS_ONLY).
+ */
+#define USE_TX_COAL_NOW 0
+
+/*
+ * Addressing:
+ *
+ * The Tigon uses 64-bit host addresses, regardless of their actual
+ * length, and it expects a big-endian format. For 32 bit systems the
+ * upper 32 bits of the address are simply ignored (zero), however for
+ * little endian 64 bit systems (Alpha) this looks strange with the
+ * two parts of the address word being swapped.
+ *
+ * The addresses are split in two 32 bit words for all architectures
+ * as some of them are in PCI shared memory and it is necessary to use
+ * readl/writel to access them.
+ *
+ * The addressing code is derived from Pete Wyckoff's work, but
+ * modified to deal properly with readl/writel usage.
+ */
+
+struct ace_regs {
+ u32 pad0[16]; /* PCI control registers */
+
+ u32 HostCtrl; /* 0x40 */
+ u32 LocalCtrl;
+
+ u32 pad1[2];
+
+ u32 MiscCfg; /* 0x50 */
+
+ u32 pad2[2];
+
+ u32 PciState;
+
+ u32 pad3[2]; /* 0x60 */
+
+ u32 WinBase;
+ u32 WinData;
+
+ u32 pad4[12]; /* 0x70 */
+
+ u32 DmaWriteState; /* 0xa0 */
+ u32 pad5[3];
+ u32 DmaReadState; /* 0xb0 */
+
+ u32 pad6[26];
+
+ u32 AssistState;
+
+ u32 pad7[8]; /* 0x120 */
+
+ u32 CpuCtrl; /* 0x140 */
+ u32 Pc;
+
+ u32 pad8[3];
+
+ u32 SramAddr; /* 0x154 */
+ u32 SramData;
+
+ u32 pad9[49];
+
+ u32 MacRxState; /* 0x220 */
+
+ u32 pad10[7];
+
+ u32 CpuBCtrl; /* 0x240 */
+ u32 PcB;
+
+ u32 pad11[3];
+
+ u32 SramBAddr; /* 0x254 */
+ u32 SramBData;
+
+ u32 pad12[105];
+
+ u32 pad13[32]; /* 0x400 */
+ u32 Stats[32];
+
+ u32 Mb0Hi; /* 0x500 */
+ u32 Mb0Lo;
+ u32 Mb1Hi;
+ u32 CmdPrd;
+ u32 Mb2Hi;
+ u32 TxPrd;
+ u32 Mb3Hi;
+ u32 RxStdPrd;
+ u32 Mb4Hi;
+ u32 RxJumboPrd;
+ u32 Mb5Hi;
+ u32 RxMiniPrd;
+ u32 Mb6Hi;
+ u32 Mb6Lo;
+ u32 Mb7Hi;
+ u32 Mb7Lo;
+ u32 Mb8Hi;
+ u32 Mb8Lo;
+ u32 Mb9Hi;
+ u32 Mb9Lo;
+ u32 MbAHi;
+ u32 MbALo;
+ u32 MbBHi;
+ u32 MbBLo;
+ u32 MbCHi;
+ u32 MbCLo;
+ u32 MbDHi;
+ u32 MbDLo;
+ u32 MbEHi;
+ u32 MbELo;
+ u32 MbFHi;
+ u32 MbFLo;
+
+ u32 pad14[32];
+
+ u32 MacAddrHi; /* 0x600 */
+ u32 MacAddrLo;
+ u32 InfoPtrHi;
+ u32 InfoPtrLo;
+ u32 MultiCastHi; /* 0x610 */
+ u32 MultiCastLo;
+ u32 ModeStat;
+ u32 DmaReadCfg;
+ u32 DmaWriteCfg; /* 0x620 */
+ u32 TxBufRat;
+ u32 EvtCsm;
+ u32 CmdCsm;
+ u32 TuneRxCoalTicks;/* 0x630 */
+ u32 TuneTxCoalTicks;
+ u32 TuneStatTicks;
+ u32 TuneMaxTxDesc;
+ u32 TuneMaxRxDesc; /* 0x640 */
+ u32 TuneTrace;
+ u32 TuneLink;
+ u32 TuneFastLink;
+ u32 TracePtr; /* 0x650 */
+ u32 TraceStrt;
+ u32 TraceLen;
+ u32 IfIdx;
+ u32 IfMtu; /* 0x660 */
+ u32 MaskInt;
+ u32 GigLnkState;
+ u32 FastLnkState;
+ u32 pad16[4]; /* 0x670 */
+ u32 RxRetCsm; /* 0x680 */
+
+ u32 pad17[31];
+
+ u32 CmdRng[64]; /* 0x700 */
+ u32 Window[0x200];
+};
+
+
+typedef struct {
+ u32 addrhi;
+ u32 addrlo;
+} aceaddr;
+
+
+#define ACE_WINDOW_SIZE 0x800
+
+#define ACE_JUMBO_MTU 9000
+#define ACE_STD_MTU 1500
+
+#define ACE_TRACE_SIZE 0x8000
+
+/*
+ * Host control register bits.
+ */
+
+#define IN_INT 0x01
+#define CLR_INT 0x02
+#define HW_RESET 0x08
+#define BYTE_SWAP 0x10
+#define WORD_SWAP 0x20
+#define MASK_INTS 0x40
+
+/*
+ * Local control register bits.
+ */
+
+#define EEPROM_DATA_IN 0x800000
+#define EEPROM_DATA_OUT 0x400000
+#define EEPROM_WRITE_ENABLE 0x200000
+#define EEPROM_CLK_OUT 0x100000
+
+#define EEPROM_BASE 0xa0000000
+
+#define EEPROM_WRITE_SELECT 0xa0
+#define EEPROM_READ_SELECT 0xa1
+
+#define SRAM_BANK_512K 0x200
+
+
+/*
+ * udelay() values for when clocking the eeprom
+ */
+#define ACE_SHORT_DELAY 2
+#define ACE_LONG_DELAY 4
+
+
+/*
+ * Misc Config bits
+ */
+
+#define SYNC_SRAM_TIMING 0x100000
+
+
+/*
+ * CPU state bits.
+ */
+
+#define CPU_RESET 0x01
+#define CPU_TRACE 0x02
+#define CPU_PROM_FAILED 0x10
+#define CPU_HALT 0x00010000
+#define CPU_HALTED 0xffff0000
+
+
+/*
+ * PCI State bits.
+ */
+
+#define DMA_READ_MAX_4 0x04
+#define DMA_READ_MAX_16 0x08
+#define DMA_READ_MAX_32 0x0c
+#define DMA_READ_MAX_64 0x10
+#define DMA_READ_MAX_128 0x14
+#define DMA_READ_MAX_256 0x18
+#define DMA_READ_MAX_1K 0x1c
+#define DMA_WRITE_MAX_4 0x20
+#define DMA_WRITE_MAX_16 0x40
+#define DMA_WRITE_MAX_32 0x60
+#define DMA_WRITE_MAX_64 0x80
+#define DMA_WRITE_MAX_128 0xa0
+#define DMA_WRITE_MAX_256 0xc0
+#define DMA_WRITE_MAX_1K 0xe0
+#define DMA_READ_WRITE_MASK 0xfc
+#define MEM_READ_MULTIPLE 0x00020000
+#define PCI_66MHZ 0x00080000
+#define PCI_32BIT 0x00100000
+#define DMA_WRITE_ALL_ALIGN 0x00800000
+#define READ_CMD_MEM 0x06000000
+#define WRITE_CMD_MEM 0x70000000
+
+
+/*
+ * Mode status
+ */
+
+#define ACE_BYTE_SWAP_BD 0x02
+#define ACE_WORD_SWAP_BD 0x04 /* not actually used */
+#define ACE_WARN 0x08
+#define ACE_BYTE_SWAP_DMA 0x10
+#define ACE_NO_JUMBO_FRAG 0x200
+#define ACE_FATAL 0x40000000
+
+
+/*
+ * DMA config
+ */
+
+#define DMA_THRESH_1W 0x10
+#define DMA_THRESH_2W 0x20
+#define DMA_THRESH_4W 0x40
+#define DMA_THRESH_8W 0x80
+#define DMA_THRESH_16W 0x100
+#define DMA_THRESH_32W 0x0 /* not described in doc, but exists. */
+
+
+/*
+ * Tuning parameters
+ */
+
+#define TICKS_PER_SEC 1000000
+
+
+/*
+ * Link bits
+ */
+
+#define LNK_PREF 0x00008000
+#define LNK_10MB 0x00010000
+#define LNK_100MB 0x00020000
+#define LNK_1000MB 0x00040000
+#define LNK_FULL_DUPLEX 0x00080000
+#define LNK_HALF_DUPLEX 0x00100000
+#define LNK_TX_FLOW_CTL_Y 0x00200000
+#define LNK_NEG_ADVANCED 0x00400000
+#define LNK_RX_FLOW_CTL_Y 0x00800000
+#define LNK_NIC 0x01000000
+#define LNK_JAM 0x02000000
+#define LNK_JUMBO 0x04000000
+#define LNK_ALTEON 0x08000000
+#define LNK_NEG_FCTL 0x10000000
+#define LNK_NEGOTIATE 0x20000000
+#define LNK_ENABLE 0x40000000
+#define LNK_UP 0x80000000
+
+
+/*
+ * Event definitions
+ */
+
+#define EVT_RING_ENTRIES 256
+#define EVT_RING_SIZE (EVT_RING_ENTRIES * sizeof(struct event))
+
+struct event {
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u32 idx:12;
+ u32 code:12;
+ u32 evt:8;
+#else
+ u32 evt:8;
+ u32 code:12;
+ u32 idx:12;
+#endif
+ u32 pad;
+};
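+
+/*
+ * Worked example (for illustration only): a "link up" notification is
+ * delivered as an event with evt = E_LNK_STATE (0x06) and
+ * code = E_C_LINK_UP (0x01); ace_handle_event() does not consult the
+ * idx field for link events, and the pad word is unused.
+ */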
+
+
+/*
+ * Events
+ */
+
+#define E_FW_RUNNING 0x01
+#define E_STATS_UPDATED 0x04
+
+#define E_STATS_UPDATE 0x04
+
+#define E_LNK_STATE 0x06
+#define E_C_LINK_UP 0x01
+#define E_C_LINK_DOWN 0x02
+#define E_C_LINK_10_100 0x03
+
+#define E_ERROR 0x07
+#define E_C_ERR_INVAL_CMD 0x01
+#define E_C_ERR_UNIMP_CMD 0x02
+#define E_C_ERR_BAD_CFG 0x03
+
+#define E_MCAST_LIST 0x08
+#define E_C_MCAST_ADDR_ADD 0x01
+#define E_C_MCAST_ADDR_DEL 0x02
+
+#define E_RESET_JUMBO_RNG 0x09
+
+
+/*
+ * Commands
+ */
+
+#define CMD_RING_ENTRIES 64
+
+struct cmd {
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u32 idx:12;
+ u32 code:12;
+ u32 evt:8;
+#else
+ u32 evt:8;
+ u32 code:12;
+ u32 idx:12;
+#endif
+};
+
+
+#define C_HOST_STATE 0x01
+#define C_C_STACK_UP 0x01
+#define C_C_STACK_DOWN 0x02
+
+#define C_FDR_FILTERING 0x02
+#define C_C_FDR_FILT_ENABLE 0x01
+#define C_C_FDR_FILT_DISABLE 0x02
+
+#define C_SET_RX_PRD_IDX 0x03
+#define C_UPDATE_STATS 0x04
+#define C_RESET_JUMBO_RNG 0x05
+#define C_ADD_MULTICAST_ADDR 0x08
+#define C_DEL_MULTICAST_ADDR 0x09
+
+#define C_SET_PROMISC_MODE 0x0a
+#define C_C_PROMISC_ENABLE 0x01
+#define C_C_PROMISC_DISABLE 0x02
+
+#define C_LNK_NEGOTIATION 0x0b
+#define C_C_NEGOTIATE_BOTH 0x00
+#define C_C_NEGOTIATE_GIG 0x01
+#define C_C_NEGOTIATE_10_100 0x02
+
+#define C_SET_MAC_ADDR 0x0c
+#define C_CLEAR_PROFILE 0x0d
+
+#define C_SET_MULTICAST_MODE 0x0e
+#define C_C_MCAST_ENABLE 0x01
+#define C_C_MCAST_DISABLE 0x02
+
+#define C_CLEAR_STATS 0x0f
+#define C_SET_RX_JUMBO_PRD_IDX 0x10
+#define C_REFRESH_STATS 0x11
+
+
+/*
+ * Descriptor flags
+ */
+#define BD_FLG_TCP_UDP_SUM 0x01
+#define BD_FLG_IP_SUM 0x02
+#define BD_FLG_END 0x04
+#define BD_FLG_MORE 0x08
+#define BD_FLG_JUMBO 0x10
+#define BD_FLG_UCAST 0x20
+#define BD_FLG_MCAST 0x40
+#define BD_FLG_BCAST 0x60
+#define BD_FLG_TYP_MASK 0x60
+#define BD_FLG_IP_FRAG 0x80
+#define BD_FLG_IP_FRAG_END 0x100
+#define BD_FLG_VLAN_TAG 0x200
+#define BD_FLG_FRAME_ERROR 0x400
+#define BD_FLG_COAL_NOW 0x800
+#define BD_FLG_MINI 0x1000
+
+
+/*
+ * Ring Control block flags
+ */
+#define RCB_FLG_TCP_UDP_SUM 0x01
+#define RCB_FLG_IP_SUM 0x02
+#define RCB_FLG_NO_PSEUDO_HDR 0x08
+#define RCB_FLG_VLAN_ASSIST 0x10
+#define RCB_FLG_COAL_INT_ONLY 0x20
+#define RCB_FLG_TX_HOST_RING 0x40
+#define RCB_FLG_IEEE_SNAP_SUM 0x80
+#define RCB_FLG_EXT_RX_BD 0x100
+#define RCB_FLG_RNG_DISABLE 0x200
+
+
+/*
+ * TX ring - the maximum number of TX ring entries for the Tigon I is 128
+ */
+#define MAX_TX_RING_ENTRIES 256
+#define TIGON_I_TX_RING_ENTRIES 128
+#define TX_RING_SIZE (MAX_TX_RING_ENTRIES * sizeof(struct tx_desc))
+#define TX_RING_BASE 0x3800
+
+struct tx_desc{
+ aceaddr addr;
+ u32 flagsize;
+#if 0
+/*
+ * This is in PCI shared mem and must be accessed with readl/writel
+ * real layout is:
+ */
+#if __LITTLE_ENDIAN
+ u16 flags;
+ u16 size;
+ u16 vlan;
+ u16 reserved;
+#else
+ u16 size;
+ u16 flags;
+ u16 reserved;
+ u16 vlan;
+#endif
+#endif
+ u32 vlanres;
+};
+
+
+#define RX_STD_RING_ENTRIES 512
+#define RX_STD_RING_SIZE (RX_STD_RING_ENTRIES * sizeof(struct rx_desc))
+
+#define RX_JUMBO_RING_ENTRIES 256
+#define RX_JUMBO_RING_SIZE (RX_JUMBO_RING_ENTRIES *sizeof(struct rx_desc))
+
+#define RX_MINI_RING_ENTRIES 1024
+#define RX_MINI_RING_SIZE (RX_MINI_RING_ENTRIES *sizeof(struct rx_desc))
+
+#define RX_RETURN_RING_ENTRIES 2048
+#define RX_RETURN_RING_SIZE	(RX_RETURN_RING_ENTRIES * \
+ sizeof(struct rx_desc))
+
+struct rx_desc{
+ aceaddr addr;
+#ifdef __LITTLE_ENDIAN
+ u16 size;
+ u16 idx;
+#else
+ u16 idx;
+ u16 size;
+#endif
+#ifdef __LITTLE_ENDIAN
+ u16 flags;
+ u16 type;
+#else
+ u16 type;
+ u16 flags;
+#endif
+#ifdef __LITTLE_ENDIAN
+ u16 tcp_udp_csum;
+ u16 ip_csum;
+#else
+ u16 ip_csum;
+ u16 tcp_udp_csum;
+#endif
+#ifdef __LITTLE_ENDIAN
+ u16 vlan;
+ u16 err_flags;
+#else
+ u16 err_flags;
+ u16 vlan;
+#endif
+ u32 reserved;
+ u32 opague;
+};
+
+
+/*
+ * This struct is shared with the NIC firmware.
+ */
+struct ring_ctrl {
+ aceaddr rngptr;
+#ifdef __LITTLE_ENDIAN
+ u16 flags;
+ u16 max_len;
+#else
+ u16 max_len;
+ u16 flags;
+#endif
+ u32 pad;
+};
+
+
+struct ace_mac_stats {
+ u32 excess_colls;
+ u32 coll_1;
+ u32 coll_2;
+ u32 coll_3;
+ u32 coll_4;
+ u32 coll_5;
+ u32 coll_6;
+ u32 coll_7;
+ u32 coll_8;
+ u32 coll_9;
+ u32 coll_10;
+ u32 coll_11;
+ u32 coll_12;
+ u32 coll_13;
+ u32 coll_14;
+ u32 coll_15;
+ u32 late_coll;
+ u32 defers;
+ u32 crc_err;
+ u32 underrun;
+ u32 crs_err;
+ u32 pad[3];
+ u32 drop_ula;
+ u32 drop_mc;
+ u32 drop_fc;
+ u32 drop_space;
+ u32 coll;
+ u32 kept_bc;
+ u32 kept_mc;
+ u32 kept_uc;
+};
+
+
+struct ace_info {
+ union {
+ u32 stats[256];
+ } s;
+ struct ring_ctrl evt_ctrl;
+ struct ring_ctrl cmd_ctrl;
+ struct ring_ctrl tx_ctrl;
+ struct ring_ctrl rx_std_ctrl;
+ struct ring_ctrl rx_jumbo_ctrl;
+ struct ring_ctrl rx_mini_ctrl;
+ struct ring_ctrl rx_return_ctrl;
+ aceaddr evt_prd_ptr;
+ aceaddr rx_ret_prd_ptr;
+ aceaddr tx_csm_ptr;
+ aceaddr stats2_ptr;
+};
+
+
+struct ring_info {
+ struct sk_buff *skb;
+ DEFINE_DMA_UNMAP_ADDR(mapping);
+};
+
+
+/*
+ * Funny... As soon as we add maplen on alpha, it starts to work
+ * much slower. Hmm... is it because the struct does not fit in one cacheline?
+ * So, split tx_ring_info.
+ */
+struct tx_ring_info {
+ struct sk_buff *skb;
+ DEFINE_DMA_UNMAP_ADDR(mapping);
+ DEFINE_DMA_UNMAP_LEN(maplen);
+};
+
+
+/*
+ * struct ace_skb holding the rings of skb's. This is an awful lot of
+ * pointers, but I don't see any other smart way to do this in an
+ * efficient manner ;-(
+ */
+struct ace_skb
+{
+ struct tx_ring_info tx_skbuff[MAX_TX_RING_ENTRIES];
+ struct ring_info rx_std_skbuff[RX_STD_RING_ENTRIES];
+ struct ring_info rx_mini_skbuff[RX_MINI_RING_ENTRIES];
+ struct ring_info rx_jumbo_skbuff[RX_JUMBO_RING_ENTRIES];
+};
+
+
+/*
+ * Struct private for the AceNIC.
+ *
+ * Elements are grouped so variables used by the tx handling goes
+ * together, and will go into the same cache lines etc. in order to
+ * avoid cache line contention between the rx and tx handling on SMP.
+ *
+ * Frequently accessed variables are put at the beginning of the
+ * struct to help the compiler generate better/shorter code.
+ */
+struct ace_private
+{
+ struct ace_info *info;
+ struct ace_regs __iomem *regs; /* register base */
+ struct ace_skb *skb;
+ dma_addr_t info_dma; /* 32/64 bit */
+
+ int version, link;
+ int promisc, mcast_all;
+
+ /*
+ * TX elements
+ */
+ struct tx_desc *tx_ring;
+ u32 tx_prd;
+ volatile u32 tx_ret_csm;
+ int tx_ring_entries;
+
+ /*
+ * RX elements
+ */
+ unsigned long std_refill_busy
+ __attribute__ ((aligned (SMP_CACHE_BYTES)));
+ unsigned long mini_refill_busy, jumbo_refill_busy;
+ atomic_t cur_rx_bufs;
+ atomic_t cur_mini_bufs;
+ atomic_t cur_jumbo_bufs;
+ u32 rx_std_skbprd, rx_mini_skbprd, rx_jumbo_skbprd;
+ u32 cur_rx;
+
+ struct rx_desc *rx_std_ring;
+ struct rx_desc *rx_jumbo_ring;
+ struct rx_desc *rx_mini_ring;
+ struct rx_desc *rx_return_ring;
+
+ int tasklet_pending, jumbo;
+ struct tasklet_struct ace_tasklet;
+
+ struct event *evt_ring;
+
+ volatile u32 *evt_prd, *rx_ret_prd, *tx_csm;
+
+ dma_addr_t tx_ring_dma; /* 32/64 bit */
+ dma_addr_t rx_ring_base_dma;
+ dma_addr_t evt_ring_dma;
+ dma_addr_t evt_prd_dma, rx_ret_prd_dma, tx_csm_dma;
+
+ unsigned char *trace_buf;
+ struct pci_dev *pdev;
+ struct net_device *next;
+ volatile int fw_running;
+ int board_idx;
+ u16 pci_command;
+ u8 pci_latency;
+ const char *name;
+#ifdef INDEX_DEBUG
+ spinlock_t debug_lock
+ __attribute__ ((aligned (SMP_CACHE_BYTES)));
+ u32 last_tx, last_std_rx, last_mini_rx;
+#endif
+ int pci_using_dac;
+ u8 firmware_major;
+ u8 firmware_minor;
+ u8 firmware_fix;
+ u32 firmware_start;
+};
+
+
+#define TX_RESERVED MAX_SKB_FRAGS
+
+static inline int tx_space (struct ace_private *ap, u32 csm, u32 prd)
+{
+ return (csm - prd - 1) & (ACE_TX_RING_ENTRIES(ap) - 1);
+}
+
+#define tx_free(ap) 		tx_space(ap, (ap)->tx_ret_csm, (ap)->tx_prd)
+#define tx_ring_full(ap, csm, prd) (tx_space(ap, csm, prd) <= TX_RESERVED)
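+
+/*
+ * Worked example (illustrative; assumes a Tigon II using the full
+ * MAX_TX_RING_ENTRIES = 256 ring): with tx_ret_csm = 10 and
+ * tx_prd = 250, tx_space() yields (10 - 250 - 1) & 255 = 15 free
+ * entries.  Since TX_RESERVED is MAX_SKB_FRAGS (typically 18 with
+ * 4 KiB pages), tx_ring_full() reports the ring as full and
+ * ace_start_xmit() spins until the consumer index advances.
+ */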
+
+static inline void set_aceaddr(aceaddr *aa, dma_addr_t addr)
+{
+ u64 baddr = (u64) addr;
+ aa->addrlo = baddr & 0xffffffff;
+ aa->addrhi = baddr >> 32;
+ wmb();
+}
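+
+/*
+ * For example (illustration only), a DMA address of 0x0000000123456789
+ * is stored as addrhi = 0x00000001 and addrlo = 0x23456789; the two
+ * 32-bit halves are what the readl()/writel() based accessors and the
+ * firmware expect, as described in the addressing notes at the top of
+ * this file.
+ */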
+
+
+static inline void ace_set_txprd(struct ace_regs __iomem *regs,
+ struct ace_private *ap, u32 value)
+{
+#ifdef INDEX_DEBUG
+ unsigned long flags;
+ spin_lock_irqsave(&ap->debug_lock, flags);
+	writel(value, &regs->TxPrd);
+	if (value == ap->last_tx)
+		printk(KERN_ERR "AceNIC RACE ALERT! writing identical value "
+		       "to tx producer (%i)\n", value);
+	ap->last_tx = value;
+	spin_unlock_irqrestore(&ap->debug_lock, flags);
+#else
+	writel(value, &regs->TxPrd);
+#endif
+ wmb();
+}
+
+
+static inline void ace_mask_irq(struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+
+ if (ACE_IS_TIGON_I(ap))
+		writel(1, &regs->MaskInt);
+	else
+		writel(readl(&regs->HostCtrl) | MASK_INTS, &regs->HostCtrl);
+
+ ace_sync_irq(dev->irq);
+}
+
+
+static inline void ace_unmask_irq(struct net_device *dev)
+{
+ struct ace_private *ap = netdev_priv(dev);
+ struct ace_regs __iomem *regs = ap->regs;
+
+ if (ACE_IS_TIGON_I(ap))
+		writel(0, &regs->MaskInt);
+	else
+		writel(readl(&regs->HostCtrl) & ~MASK_INTS, &regs->HostCtrl);
+}
+
+
+/*
+ * Prototypes
+ */
+static int ace_init(struct net_device *dev);
+static void ace_load_std_rx_ring(struct net_device *dev, int nr_bufs);
+static void ace_load_mini_rx_ring(struct net_device *dev, int nr_bufs);
+static void ace_load_jumbo_rx_ring(struct net_device *dev, int nr_bufs);
+static irqreturn_t ace_interrupt(int irq, void *dev_id);
+static int ace_load_firmware(struct net_device *dev);
+static int ace_open(struct net_device *dev);
+static netdev_tx_t ace_start_xmit(struct sk_buff *skb,
+ struct net_device *dev);
+static int ace_close(struct net_device *dev);
+static void ace_tasklet(unsigned long dev);
+static void ace_dump_trace(struct ace_private *ap);
+static void ace_set_multicast_list(struct net_device *dev);
+static int ace_change_mtu(struct net_device *dev, int new_mtu);
+static int ace_set_mac_addr(struct net_device *dev, void *p);
+static void ace_set_rxtx_parms(struct net_device *dev, int jumbo);
+static int ace_allocate_descriptors(struct net_device *dev);
+static void ace_free_descriptors(struct net_device *dev);
+static void ace_init_cleanup(struct net_device *dev);
+static struct net_device_stats *ace_get_stats(struct net_device *dev);
+static int read_eeprom_byte(struct net_device *dev, unsigned long offset);
+
+#endif /* _ACENIC_H_ */
--- /dev/null
+/* typhoon.c: A Linux Ethernet device driver for 3Com 3CR990 family of NICs */
+/*
+ Written 2002-2004 by David Dillow <dave@thedillows.org>
+ Based on code written 1998-2000 by Donald Becker <becker@scyld.com> and
+ Linux 2.2.x driver by David P. McLean <davidpmclean@yahoo.com>.
+
+ This software may be used and distributed according to the terms of
+ the GNU General Public License (GPL), incorporated herein by reference.
+ Drivers based on or derived from this code fall under the GPL and must
+ retain the authorship, copyright and license notice. This file is not
+ a complete program and may only be used when the entire operating
+ system is licensed under the GPL.
+
+ This software is available on a public web site. It may enable
+ cryptographic capabilities of the 3Com hardware, and may be
+ exported from the United States under License Exception "TSU"
+ pursuant to 15 C.F.R. Section 740.13(e).
+
+ This work was funded by the National Library of Medicine under
+ the Department of Energy project number 0274DD06D1 and NLM project
+ number Y1-LM-2015-01.
+
+ This driver is designed for the 3Com 3CR990 Family of cards with the
+ 3XP Processor. It has been tested on x86 and sparc64.
+
+ KNOWN ISSUES:
+ *) Cannot DMA Rx packets to a 2 byte aligned address. Also firmware
+ issue. Hopefully 3Com will fix it.
+ *) Waiting for a command response takes 8ms due to non-preemptable
+ polling. Only significant for getting stats and creating
+	SAs, but an ugly wart nevertheless.
+
+ TODO:
+ *) Doesn't do IPSEC offloading. Yet. Keep yer pants on, it's coming.
+ *) Add more support for ethtool (especially for NIC stats)
+ *) Allow disabling of RX checksum offloading
+ *) Fix MAC changing to work while the interface is up
+ (Need to put commands on the TX ring, which changes
+ the locking)
+ *) Add in FCS to {rx,tx}_bytes, since the hardware doesn't. See
+ http://oss.sgi.com/cgi-bin/mesg.cgi?a=netdev&i=20031215152211.7003fe8e.rddunlap%40osdl.org
+*/
+
+/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
+ * Setting to > 1518 effectively disables this feature.
+ */
+static int rx_copybreak = 200;
+
+/* Should we use MMIO or Port IO?
+ * 0: Port IO
+ * 1: MMIO
+ * 2: Try MMIO, fallback to Port IO
+ */
+static unsigned int use_mmio = 2;
+
+/* end user-configurable values */
+
+/* Maximum number of multicast addresses to filter (vs. rx-all-multicast).
+ */
+static const int multicast_filter_limit = 32;
+
+/* Operational parameters that are set at compile time. */
+
+/* Keep the ring sizes a power of two for compile efficiency.
+ * The compiler will convert <unsigned>'%'<2^N> into a bit mask.
+ * Making the Tx ring too large decreases the effectiveness of channel
+ * bonding and packet priority.
+ * There are no ill effects from too-large receive rings.
+ *
+ * We don't currently use the Hi Tx ring, so don't make it very big.
+ *
+ * Beware that if we start using the Hi Tx ring, we will need to change
+ * typhoon_num_free_tx() and typhoon_tx_complete() to account for that.
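+ *
+ * For example, with TXLO_ENTRIES = 128 the compiler can turn
+ * "index % TXLO_ENTRIES" into "index & 127", which is why the sizes
+ * above are kept at powers of two.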
+ */
+#define TXHI_ENTRIES 2
+#define TXLO_ENTRIES 128
+#define RX_ENTRIES 32
+#define COMMAND_ENTRIES 16
+#define RESPONSE_ENTRIES 32
+
+#define COMMAND_RING_SIZE (COMMAND_ENTRIES * sizeof(struct cmd_desc))
+#define RESPONSE_RING_SIZE (RESPONSE_ENTRIES * sizeof(struct resp_desc))
+
+/* The 3XP will preload and remove 64 entries from the free buffer
+ * list, and we need one entry to keep the ring from wrapping, so
+ * to keep this a power of two, we use 128 entries.
+ */
+#define RXFREE_ENTRIES 128
+#define RXENT_ENTRIES (RXFREE_ENTRIES - 1)
+
+/* Operational parameters that usually are not changed. */
+
+/* Time in jiffies before concluding the transmitter is hung. */
+#define TX_TIMEOUT (2*HZ)
+
+#define PKT_BUF_SZ 1536
+#define FIRMWARE_NAME "3com/typhoon.bin"
+
+#define pr_fmt(fmt) KBUILD_MODNAME " " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/ethtool.h>
+#include <linux/if_vlan.h>
+#include <linux/crc32.h>
+#include <linux/bitops.h>
+#include <asm/processor.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <linux/in6.h>
+#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+
+#include "typhoon.h"
+
+MODULE_AUTHOR("David Dillow <dave@thedillows.org>");
+MODULE_VERSION("1.0");
+MODULE_LICENSE("GPL");
+MODULE_FIRMWARE(FIRMWARE_NAME);
+MODULE_DESCRIPTION("3Com Typhoon Family (3C990, 3CR990, and variants)");
+MODULE_PARM_DESC(rx_copybreak, "Packets smaller than this are copied and "
+ "the buffer given back to the NIC. Default "
+ "is 200.");
+MODULE_PARM_DESC(use_mmio, "Use MMIO (1) or PIO(0) to access the NIC. "
+ "Default is to try MMIO and fallback to PIO.");
+module_param(rx_copybreak, int, 0);
+module_param(use_mmio, int, 0);
+
+#if defined(NETIF_F_TSO) && MAX_SKB_FRAGS > 32
+#warning Typhoon only supports 32 entries in its SG list for TSO, disabling TSO
+#undef NETIF_F_TSO
+#endif
+
+#if TXLO_ENTRIES <= (2 * MAX_SKB_FRAGS)
+#error TX ring too small!
+#endif
+
+struct typhoon_card_info {
+ const char *name;
+ const int capabilities;
+};
+
+#define TYPHOON_CRYPTO_NONE 0x00
+#define TYPHOON_CRYPTO_DES 0x01
+#define TYPHOON_CRYPTO_3DES 0x02
+#define TYPHOON_CRYPTO_VARIABLE 0x04
+#define TYPHOON_FIBER 0x08
+#define TYPHOON_WAKEUP_NEEDS_RESET 0x10
+
+enum typhoon_cards {
+ TYPHOON_TX = 0, TYPHOON_TX95, TYPHOON_TX97, TYPHOON_SVR,
+ TYPHOON_SVR95, TYPHOON_SVR97, TYPHOON_TXM, TYPHOON_BSVR,
+ TYPHOON_FX95, TYPHOON_FX97, TYPHOON_FX95SVR, TYPHOON_FX97SVR,
+ TYPHOON_FXM,
+};
+
+/* directly indexed by enum typhoon_cards, above */
+static struct typhoon_card_info typhoon_card_info[] __devinitdata = {
+ { "3Com Typhoon (3C990-TX)",
+ TYPHOON_CRYPTO_NONE},
+ { "3Com Typhoon (3CR990-TX-95)",
+ TYPHOON_CRYPTO_DES},
+ { "3Com Typhoon (3CR990-TX-97)",
+ TYPHOON_CRYPTO_DES | TYPHOON_CRYPTO_3DES},
+ { "3Com Typhoon (3C990SVR)",
+ TYPHOON_CRYPTO_NONE},
+ { "3Com Typhoon (3CR990SVR95)",
+ TYPHOON_CRYPTO_DES},
+ { "3Com Typhoon (3CR990SVR97)",
+ TYPHOON_CRYPTO_DES | TYPHOON_CRYPTO_3DES},
+ { "3Com Typhoon2 (3C990B-TX-M)",
+ TYPHOON_CRYPTO_VARIABLE},
+ { "3Com Typhoon2 (3C990BSVR)",
+ TYPHOON_CRYPTO_VARIABLE},
+ { "3Com Typhoon (3CR990-FX-95)",
+ TYPHOON_CRYPTO_DES | TYPHOON_FIBER},
+ { "3Com Typhoon (3CR990-FX-97)",
+ TYPHOON_CRYPTO_DES | TYPHOON_CRYPTO_3DES | TYPHOON_FIBER},
+ { "3Com Typhoon (3CR990-FX-95 Server)",
+ TYPHOON_CRYPTO_DES | TYPHOON_FIBER},
+ { "3Com Typhoon (3CR990-FX-97 Server)",
+ TYPHOON_CRYPTO_DES | TYPHOON_CRYPTO_3DES | TYPHOON_FIBER},
+ { "3Com Typhoon2 (3C990B-FX-97)",
+ TYPHOON_CRYPTO_VARIABLE | TYPHOON_FIBER},
+};
+
+/* Notes on the new subsystem numbering scheme:
+ * bits 0-1 indicate crypto capabilities: (0) variable, (1) DES, or (2) 3DES
+ * bit 4 indicates if this card has secured firmware (we don't support it)
+ * bit 8 indicates if this is a (0) copper or (1) fiber card
+ * bits 12-16 indicate card type: (0) client and (1) server
+ * bits 12-16 indicate card type: (0) client and (1) server
+ */
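+/* Example (illustrative): subdevice 0x2102 on the 3CR990-FX has its low
+ * two bits equal to 2 (3DES capable) and bit 8 set (fiber); the table
+ * below accordingly maps it to TYPHOON_FX97SVR.
+ */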
+static DEFINE_PCI_DEVICE_TABLE(typhoon_pci_tbl) = {
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,TYPHOON_TX },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_TX_95,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_TX95 },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_TX_97,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_TX97 },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990B,
+ PCI_ANY_ID, 0x1000, 0, 0, TYPHOON_TXM },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990B,
+ PCI_ANY_ID, 0x1102, 0, 0, TYPHOON_FXM },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990B,
+ PCI_ANY_ID, 0x2000, 0, 0, TYPHOON_BSVR },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_FX,
+ PCI_ANY_ID, 0x1101, 0, 0, TYPHOON_FX95 },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_FX,
+ PCI_ANY_ID, 0x1102, 0, 0, TYPHOON_FX97 },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_FX,
+ PCI_ANY_ID, 0x2101, 0, 0, TYPHOON_FX95SVR },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_FX,
+ PCI_ANY_ID, 0x2102, 0, 0, TYPHOON_FX97SVR },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990SVR95,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_SVR95 },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990SVR97,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_SVR97 },
+ { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990SVR,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_SVR },
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, typhoon_pci_tbl);
+
+/* Define the shared memory area
+ * Align everything the 3XP will normally be using.
+ * We'll need to move/align txHi if we start using that ring.
+ */
+#define __3xp_aligned ____cacheline_aligned
+struct typhoon_shared {
+ struct typhoon_interface iface;
+ struct typhoon_indexes indexes __3xp_aligned;
+ struct tx_desc txLo[TXLO_ENTRIES] __3xp_aligned;
+ struct rx_desc rxLo[RX_ENTRIES] __3xp_aligned;
+ struct rx_desc rxHi[RX_ENTRIES] __3xp_aligned;
+ struct cmd_desc cmd[COMMAND_ENTRIES] __3xp_aligned;
+ struct resp_desc resp[RESPONSE_ENTRIES] __3xp_aligned;
+ struct rx_free rxBuff[RXFREE_ENTRIES] __3xp_aligned;
+ u32 zeroWord;
+ struct tx_desc txHi[TXHI_ENTRIES];
+} __packed;
+
+struct rxbuff_ent {
+ struct sk_buff *skb;
+ dma_addr_t dma_addr;
+};
+
+struct typhoon {
+ /* Tx cache line section */
+ struct transmit_ring txLoRing ____cacheline_aligned;
+ struct pci_dev * tx_pdev;
+ void __iomem *tx_ioaddr;
+ u32 txlo_dma_addr;
+
+ /* Irq/Rx cache line section */
+ void __iomem *ioaddr ____cacheline_aligned;
+ struct typhoon_indexes *indexes;
+ u8 awaiting_resp;
+ u8 duplex;
+ u8 speed;
+ u8 card_state;
+ struct basic_ring rxLoRing;
+ struct pci_dev * pdev;
+ struct net_device * dev;
+ struct napi_struct napi;
+ struct basic_ring rxHiRing;
+ struct basic_ring rxBuffRing;
+ struct rxbuff_ent rxbuffers[RXENT_ENTRIES];
+
+ /* general section */
+ spinlock_t command_lock ____cacheline_aligned;
+ struct basic_ring cmdRing;
+ struct basic_ring respRing;
+ struct net_device_stats stats;
+ struct net_device_stats stats_saved;
+ struct typhoon_shared * shared;
+ dma_addr_t shared_dma;
+ __le16 xcvr_select;
+ __le16 wol_events;
+ __le32 offload;
+
+ /* unused stuff (future use) */
+ int capabilities;
+ struct transmit_ring txHiRing;
+};
+
+enum completion_wait_values {
+ NoWait = 0, WaitNoSleep, WaitSleep,
+};
+
+/* These are the values for the typhoon.card_state variable.
+ * These determine where the statistics will come from in get_stats().
+ * The sleep image does not support the statistics we need.
+ */
+enum state_values {
+ Sleeping = 0, Running,
+};
+
+/* PCI writes are not guaranteed to be posted in order, but outstanding writes
+ * cannot pass a read, so this forces current writes to post.
+ */
+#define typhoon_post_pci_writes(x) \
+ do { if(likely(use_mmio)) ioread32(x+TYPHOON_REG_HEARTBEAT); } while(0)
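+
+/* Note that the flush read above only matters for MMIO; port I/O writes
+ * are not posted, so when use_mmio == 0 there is nothing to force out and
+ * the read is skipped.
+ */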
+
+/* We'll wait up to six seconds for a reset, and half a second normally.
+ */
+#define TYPHOON_UDELAY 50
+#define TYPHOON_RESET_TIMEOUT_SLEEP (6 * HZ)
+#define TYPHOON_RESET_TIMEOUT_NOSLEEP ((6 * 1000000) / TYPHOON_UDELAY)
+#define TYPHOON_WAIT_TIMEOUT ((1000000 / 2) / TYPHOON_UDELAY)
+
+#if defined(NETIF_F_TSO)
+#define skb_tso_size(x) (skb_shinfo(x)->gso_size)
+#define TSO_NUM_DESCRIPTORS 2
+#define TSO_OFFLOAD_ON TYPHOON_OFFLOAD_TCP_SEGMENT
+#else
+#define NETIF_F_TSO 0
+#define skb_tso_size(x) 0
+#define TSO_NUM_DESCRIPTORS 0
+#define TSO_OFFLOAD_ON 0
+#endif
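+
+/* When the kernel lacks NETIF_F_TSO, the stubs above define everything to
+ * zero, so the TSO feature flag and descriptor accounting used later simply
+ * compile away.
+ */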
+
+static inline void
+typhoon_inc_index(u32 *index, const int count, const int num_entries)
+{
+ /* Increment a ring index -- we can use this for all rings except
+ * the Rx rings, as they use different size descriptors;
+ * otherwise, everything is the same size as a cmd_desc
+ */
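+ /* Note that the indexes here are byte offsets into the ring, not entry
+ * counts, so we advance by whole descriptors and wrap at the ring's byte
+ * length.
+ */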
+ *index += count * sizeof(struct cmd_desc);
+ *index %= num_entries * sizeof(struct cmd_desc);
+}
+
+static inline void
+typhoon_inc_cmd_index(u32 *index, const int count)
+{
+ typhoon_inc_index(index, count, COMMAND_ENTRIES);
+}
+
+static inline void
+typhoon_inc_resp_index(u32 *index, const int count)
+{
+ typhoon_inc_index(index, count, RESPONSE_ENTRIES);
+}
+
+static inline void
+typhoon_inc_rxfree_index(u32 *index, const int count)
+{
+ typhoon_inc_index(index, count, RXFREE_ENTRIES);
+}
+
+static inline void
+typhoon_inc_tx_index(u32 *index, const int count)
+{
+ /* if we start using the Hi Tx ring, this needs updating */
+ typhoon_inc_index(index, count, TXLO_ENTRIES);
+}
+
+static inline void
+typhoon_inc_rx_index(u32 *index, const int count)
+{
+ /* sizeof(struct rx_desc) != sizeof(struct cmd_desc) */
+ *index += count * sizeof(struct rx_desc);
+ *index %= RX_ENTRIES * sizeof(struct rx_desc);
+}
+
+static int
+typhoon_reset(void __iomem *ioaddr, int wait_type)
+{
+ int i, err = 0;
+ int timeout;
+
+ if(wait_type == WaitNoSleep)
+ timeout = TYPHOON_RESET_TIMEOUT_NOSLEEP;
+ else
+ timeout = TYPHOON_RESET_TIMEOUT_SLEEP;
+
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);
+
+ iowrite32(TYPHOON_RESET_ALL, ioaddr + TYPHOON_REG_SOFT_RESET);
+ typhoon_post_pci_writes(ioaddr);
+ udelay(1);
+ iowrite32(TYPHOON_RESET_NONE, ioaddr + TYPHOON_REG_SOFT_RESET);
+
+ if(wait_type != NoWait) {
+ for(i = 0; i < timeout; i++) {
+ if(ioread32(ioaddr + TYPHOON_REG_STATUS) ==
+ TYPHOON_STATUS_WAITING_FOR_HOST)
+ goto out;
+
+ if(wait_type == WaitSleep)
+ schedule_timeout_uninterruptible(1);
+ else
+ udelay(TYPHOON_UDELAY);
+ }
+
+ err = -ETIMEDOUT;
+ }
+
+out:
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);
+
+ /* The 3XP seems to need a little extra time to complete the load
+ * of the sleep image before we can reliably boot it. Failure to
+ * do this occasionally results in a hung adapter after boot in
+ * typhoon_init_one() while trying to read the MAC address or
+ * putting the card to sleep. 3Com's driver waits 5ms, but
+ * that seems to be overkill. However, if we can sleep, we might
+ * as well give it that much time. Otherwise, we'll give it 500us,
+ * which should be enough (I've seen it work well at 100us, but still
+ * saw occasional problems.)
+ */
+ if(wait_type == WaitSleep)
+ msleep(5);
+ else
+ udelay(500);
+ return err;
+}
+
+static int
+typhoon_wait_status(void __iomem *ioaddr, u32 wait_value)
+{
+ int i, err = 0;
+
+ for(i = 0; i < TYPHOON_WAIT_TIMEOUT; i++) {
+ if(ioread32(ioaddr + TYPHOON_REG_STATUS) == wait_value)
+ goto out;
+ udelay(TYPHOON_UDELAY);
+ }
+
+ err = -ETIMEDOUT;
+
+out:
+ return err;
+}
+
+static inline void
+typhoon_media_status(struct net_device *dev, struct resp_desc *resp)
+{
+ if(resp->parm1 & TYPHOON_MEDIA_STAT_NO_LINK)
+ netif_carrier_off(dev);
+ else
+ netif_carrier_on(dev);
+}
+
+static inline void
+typhoon_hello(struct typhoon *tp)
+{
+ struct basic_ring *ring = &tp->cmdRing;
+ struct cmd_desc *cmd;
+
+ /* We only get a hello request if we've not sent anything to the
+ * card in a long while. If the lock is held, then we're in the
+ * process of issuing a command, so we don't need to respond.
+ */
+ if(spin_trylock(&tp->command_lock)) {
+ cmd = (struct cmd_desc *)(ring->ringBase + ring->lastWrite);
+ typhoon_inc_cmd_index(&ring->lastWrite, 1);
+
+ INIT_COMMAND_NO_RESPONSE(cmd, TYPHOON_CMD_HELLO_RESP);
+ wmb();
+ iowrite32(ring->lastWrite, tp->ioaddr + TYPHOON_REG_CMD_READY);
+ spin_unlock(&tp->command_lock);
+ }
+}
+
+static int
+typhoon_process_response(struct typhoon *tp, int resp_size,
+ struct resp_desc *resp_save)
+{
+ struct typhoon_indexes *indexes = tp->indexes;
+ struct resp_desc *resp;
+ u8 *base = tp->respRing.ringBase;
+ int count, len, wrap_len;
+ u32 cleared;
+ u32 ready;
+
+ cleared = le32_to_cpu(indexes->respCleared);
+ ready = le32_to_cpu(indexes->respReady);
+ while(cleared != ready) {
+ resp = (struct resp_desc *)(base + cleared);
+ count = resp->numDesc + 1;
+ if(resp_save && resp->seqNo) {
+ if(count > resp_size) {
+ resp_save->flags = TYPHOON_RESP_ERROR;
+ goto cleanup;
+ }
+
+ wrap_len = 0;
+ len = count * sizeof(*resp);
+ if(unlikely(cleared + len > RESPONSE_RING_SIZE)) {
+ wrap_len = cleared + len - RESPONSE_RING_SIZE;
+ len = RESPONSE_RING_SIZE - cleared;
+ }
+
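+ /* Copy the part up to the end of the ring first; if the response
+ * wrapped, copy the remainder from the ring base.
+ */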
+ memcpy(resp_save, resp, len);
+ if(unlikely(wrap_len)) {
+ resp_save += len / sizeof(*resp);
+ memcpy(resp_save, base, wrap_len);
+ }
+
+ resp_save = NULL;
+ } else if(resp->cmd == TYPHOON_CMD_READ_MEDIA_STATUS) {
+ typhoon_media_status(tp->dev, resp);
+ } else if(resp->cmd == TYPHOON_CMD_HELLO_RESP) {
+ typhoon_hello(tp);
+ } else {
+ netdev_err(tp->dev,
+ "dumping unexpected response 0x%04x:%d:0x%02x:0x%04x:%08x:%08x\n",
+ le16_to_cpu(resp->cmd),
+ resp->numDesc, resp->flags,
+ le16_to_cpu(resp->parm1),
+ le32_to_cpu(resp->parm2),
+ le32_to_cpu(resp->parm3));
+ }
+
+cleanup:
+ typhoon_inc_resp_index(&cleared, count);
+ }
+
+ indexes->respCleared = cpu_to_le32(cleared);
+ wmb();
+ return resp_save == NULL;
+}
+
+static inline int
+typhoon_num_free(int lastWrite, int lastRead, int ringSize)
+{
+ /* this works for all descriptors but rx_desc, as they are a
+ * different size than the cmd_desc -- everyone else is the same
+ */
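+ /* The "- 1" keeps one slot unused, so a full ring never looks like an
+ * empty one: lastRead == lastWrite always means empty.
+ */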
+ lastWrite /= sizeof(struct cmd_desc);
+ lastRead /= sizeof(struct cmd_desc);
+ return (ringSize + lastRead - lastWrite - 1) % ringSize;
+}
+
+static inline int
+typhoon_num_free_cmd(struct typhoon *tp)
+{
+ int lastWrite = tp->cmdRing.lastWrite;
+ int cmdCleared = le32_to_cpu(tp->indexes->cmdCleared);
+
+ return typhoon_num_free(lastWrite, cmdCleared, COMMAND_ENTRIES);
+}
+
+static inline int
+typhoon_num_free_resp(struct typhoon *tp)
+{
+ int respReady = le32_to_cpu(tp->indexes->respReady);
+ int respCleared = le32_to_cpu(tp->indexes->respCleared);
+
+ return typhoon_num_free(respReady, respCleared, RESPONSE_ENTRIES);
+}
+
+static inline int
+typhoon_num_free_tx(struct transmit_ring *ring)
+{
+ /* if we start using the Hi Tx ring, this needs updating */
+ return typhoon_num_free(ring->lastWrite, ring->lastRead, TXLO_ENTRIES);
+}
+
+static int
+typhoon_issue_command(struct typhoon *tp, int num_cmd, struct cmd_desc *cmd,
+ int num_resp, struct resp_desc *resp)
+{
+ struct typhoon_indexes *indexes = tp->indexes;
+ struct basic_ring *ring = &tp->cmdRing;
+ struct resp_desc local_resp;
+ int i, err = 0;
+ int got_resp;
+ int freeCmd, freeResp;
+ int len, wrap_len;
+
+ spin_lock(&tp->command_lock);
+
+ freeCmd = typhoon_num_free_cmd(tp);
+ freeResp = typhoon_num_free_resp(tp);
+
+ if(freeCmd < num_cmd || freeResp < num_resp) {
+ netdev_err(tp->dev, "no descs for cmd, had (needed) %d (%d) cmd, %d (%d) resp\n",
+ freeCmd, num_cmd, freeResp, num_resp);
+ err = -ENOMEM;
+ goto out;
+ }
+
+ if(cmd->flags & TYPHOON_CMD_RESPOND) {
+ /* If we're expecting a response, but the caller hasn't given
+ * us a place to put it, we'll provide one.
+ */
+ tp->awaiting_resp = 1;
+ if(resp == NULL) {
+ resp = &local_resp;
+ num_resp = 1;
+ }
+ }
+
+ wrap_len = 0;
+ len = num_cmd * sizeof(*cmd);
+ if(unlikely(ring->lastWrite + len > COMMAND_RING_SIZE)) {
+ wrap_len = ring->lastWrite + len - COMMAND_RING_SIZE;
+ len = COMMAND_RING_SIZE - ring->lastWrite;
+ }
+
+ memcpy(ring->ringBase + ring->lastWrite, cmd, len);
+ if(unlikely(wrap_len)) {
+ struct cmd_desc *wrap_ptr = cmd;
+ wrap_ptr += len / sizeof(*cmd);
+ memcpy(ring->ringBase, wrap_ptr, wrap_len);
+ }
+
+ typhoon_inc_cmd_index(&ring->lastWrite, num_cmd);
+
+ /* "I feel a presence... another warrior is on the mesa."
+ */
+ wmb();
+ iowrite32(ring->lastWrite, tp->ioaddr + TYPHOON_REG_CMD_READY);
+ typhoon_post_pci_writes(tp->ioaddr);
+
+ if((cmd->flags & TYPHOON_CMD_RESPOND) == 0)
+ goto out;
+
+ /* Ugh. We'll be here about 8ms, spinning our thumbs, unable to
+ * preempt or do anything other than take interrupts. So, don't
+ * wait for a response unless you have to.
+ *
+ * I've thought about trying to sleep here, but we're called
+ * from many contexts that don't allow that. Also, given the way
+ * 3Com has implemented irq coalescing, we would likely timeout --
+ * this has been observed in real life!
+ *
+ * The big killer is we have to wait to get stats from the card,
+ * though we could go to a periodic refresh of those if we don't
+ * mind them getting somewhat stale. The rest of the waiting
+ * commands occur during open/close/suspend/resume, so they aren't
+ * time critical. Creating SAs in the future will also have to
+ * wait here.
+ */
+ got_resp = 0;
+ for(i = 0; i < TYPHOON_WAIT_TIMEOUT && !got_resp; i++) {
+ if(indexes->respCleared != indexes->respReady)
+ got_resp = typhoon_process_response(tp, num_resp,
+ resp);
+ udelay(TYPHOON_UDELAY);
+ }
+
+ if(!got_resp) {
+ err = -ETIMEDOUT;
+ goto out;
+ }
+
+ /* Collect the error response even if we don't care about the
+ * rest of the response
+ */
+ if(resp->flags & TYPHOON_RESP_ERROR)
+ err = -EIO;
+
+out:
+ if(tp->awaiting_resp) {
+ tp->awaiting_resp = 0;
+ smp_wmb();
+
+ /* Ugh. If a response was added to the ring between
+ * the call to typhoon_process_response() and the clearing
+ * of tp->awaiting_resp, we could have missed the interrupt
+ * and it could hang in the ring an indeterminate amount of
+ * time. So, check for it, and interrupt ourselves if this
+ * is the case.
+ */
+ if(indexes->respCleared != indexes->respReady)
+ iowrite32(1, tp->ioaddr + TYPHOON_REG_SELF_INTERRUPT);
+ }
+
+ spin_unlock(&tp->command_lock);
+ return err;
+}
+
+static inline void
+typhoon_tso_fill(struct sk_buff *skb, struct transmit_ring *txRing,
+ u32 ring_dma)
+{
+ struct tcpopt_desc *tcpd;
+ u32 tcpd_offset = ring_dma;
+
+ tcpd = (struct tcpopt_desc *) (txRing->ringBase + txRing->lastWrite);
+ tcpd_offset += txRing->lastWrite;
+ tcpd_offset += offsetof(struct tcpopt_desc, bytesTx);
+ typhoon_inc_tx_index(&txRing->lastWrite, 1);
+
+ tcpd->flags = TYPHOON_OPT_DESC | TYPHOON_OPT_TCP_SEG;
+ tcpd->numDesc = 1;
+ tcpd->mss_flags = cpu_to_le16(skb_tso_size(skb));
+ tcpd->mss_flags |= TYPHOON_TSO_FIRST | TYPHOON_TSO_LAST;
+ tcpd->respAddrLo = cpu_to_le32(tcpd_offset);
+ tcpd->bytesTx = cpu_to_le32(skb->len);
+ tcpd->status = 0;
+}
+
+static netdev_tx_t
+typhoon_start_tx(struct sk_buff *skb, struct net_device *dev)
+{
+ struct typhoon *tp = netdev_priv(dev);
+ struct transmit_ring *txRing;
+ struct tx_desc *txd, *first_txd;
+ dma_addr_t skb_dma;
+ int numDesc;
+
+ /* we have two rings to choose from, but we only use txLo for now
+ * If we start using the Hi ring as well, we'll need to update
+ * typhoon_stop_runtime(), typhoon_interrupt(), typhoon_num_free_tx(),
+ * and TXHI_ENTRIES to match, as well as update the TSO code below
+ * to get the right DMA address
+ */
+ txRing = &tp->txLoRing;
+
+ /* We need one descriptor for each fragment of the sk_buff, plus the
+ * one for the ->data area of it.
+ *
+ * The docs say a maximum of 16 fragment descriptors per TCP option
+ * descriptor, then make a new packet descriptor and option descriptor
+ * for the next 16 fragments. The engineers say just an option
+ * descriptor is needed. I've tested up to 26 fragments with a single
+ * packet descriptor/option descriptor combo, so I use that for now.
+ *
+ * If problems develop with TSO, check this first.
+ */
+ numDesc = skb_shinfo(skb)->nr_frags + 1;
+ if (skb_is_gso(skb))
+ numDesc++;
+
+ /* When checking for free space in the ring, we need to also
+ * account for the initial Tx descriptor, and we must always leave
+ * at least one descriptor unused in the ring so that it doesn't
+ * wrap and look empty.
+ *
+ * The only time we should loop here is when we hit the race
+ * between marking the queue awake and updating the cleared index.
+ * Just loop and it will appear. This comes from the acenic driver.
+ */
+ while(unlikely(typhoon_num_free_tx(txRing) < (numDesc + 2)))
+ smp_rmb();
+
+ first_txd = (struct tx_desc *) (txRing->ringBase + txRing->lastWrite);
+ typhoon_inc_tx_index(&txRing->lastWrite, 1);
+
+ first_txd->flags = TYPHOON_TX_DESC | TYPHOON_DESC_VALID;
+ first_txd->numDesc = 0;
+ first_txd->len = 0;
+ first_txd->tx_addr = (u64)((unsigned long) skb);
+ first_txd->processFlags = 0;
+
+ if(skb->ip_summed == CHECKSUM_PARTIAL) {
+ /* The 3XP will figure out if this is UDP/TCP */
+ first_txd->processFlags |= TYPHOON_TX_PF_TCP_CHKSUM;
+ first_txd->processFlags |= TYPHOON_TX_PF_UDP_CHKSUM;
+ first_txd->processFlags |= TYPHOON_TX_PF_IP_CHKSUM;
+ }
+
+ if(vlan_tx_tag_present(skb)) {
+ first_txd->processFlags |=
+ TYPHOON_TX_PF_INSERT_VLAN | TYPHOON_TX_PF_VLAN_PRIORITY;
+ first_txd->processFlags |=
+ cpu_to_le32(htons(vlan_tx_tag_get(skb)) <<
+ TYPHOON_TX_PF_VLAN_TAG_SHIFT);
+ }
+
+ if (skb_is_gso(skb)) {
+ first_txd->processFlags |= TYPHOON_TX_PF_TCP_SEGMENT;
+ first_txd->numDesc++;
+
+ typhoon_tso_fill(skb, txRing, tp->txlo_dma_addr);
+ }
+
+ txd = (struct tx_desc *) (txRing->ringBase + txRing->lastWrite);
+ typhoon_inc_tx_index(&txRing->lastWrite, 1);
+
+ /* No need to worry about padding the packet -- the firmware pads
+ * it with zeros to ETH_ZLEN for us.
+ */
+ if(skb_shinfo(skb)->nr_frags == 0) {
+ skb_dma = pci_map_single(tp->tx_pdev, skb->data, skb->len,
+ PCI_DMA_TODEVICE);
+ txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
+ txd->len = cpu_to_le16(skb->len);
+ txd->frag.addr = cpu_to_le32(skb_dma);
+ txd->frag.addrHi = 0;
+ first_txd->numDesc++;
+ } else {
+ int i, len;
+
+ len = skb_headlen(skb);
+ skb_dma = pci_map_single(tp->tx_pdev, skb->data, len,
+ PCI_DMA_TODEVICE);
+ txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
+ txd->len = cpu_to_le16(len);
+ txd->frag.addr = cpu_to_le32(skb_dma);
+ txd->frag.addrHi = 0;
+ first_txd->numDesc++;
+
+ for(i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ void *frag_addr;
+
+ txd = (struct tx_desc *) (txRing->ringBase +
+ txRing->lastWrite);
+ typhoon_inc_tx_index(&txRing->lastWrite, 1);
+
+ len = frag->size;
+ frag_addr = (void *) page_address(frag->page) +
+ frag->page_offset;
+ skb_dma = pci_map_single(tp->tx_pdev, frag_addr, len,
+ PCI_DMA_TODEVICE);
+ txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
+ txd->len = cpu_to_le16(len);
+ txd->frag.addr = cpu_to_le32(skb_dma);
+ txd->frag.addrHi = 0;
+ first_txd->numDesc++;
+ }
+ }
+
+ /* Kick the 3XP
+ */
+ wmb();
+ iowrite32(txRing->lastWrite, tp->tx_ioaddr + txRing->writeRegister);
+
+ /* If we don't have room to put the worst case packet on the
+ * queue, then we must stop the queue. We need 2 extra
+ * descriptors -- one to prevent ring wrap, and one for the
+ * Tx header.
+ */
+ numDesc = MAX_SKB_FRAGS + TSO_NUM_DESCRIPTORS + 1;
+
+ if(typhoon_num_free_tx(txRing) < (numDesc + 2)) {
+ netif_stop_queue(dev);
+
+ /* A Tx complete IRQ could have gotten between, making
+ * the ring free again. Only need to recheck here, since
+ * Tx is serialized.
+ */
+ if(typhoon_num_free_tx(txRing) >= (numDesc + 2))
+ netif_wake_queue(dev);
+ }
+
+ return NETDEV_TX_OK;
+}
+
+static void
+typhoon_set_rx_mode(struct net_device *dev)
+{
+ struct typhoon *tp = netdev_priv(dev);
+ struct cmd_desc xp_cmd;
+ u32 mc_filter[2];
+ __le16 filter;
+
+ filter = TYPHOON_RX_FILTER_DIRECTED | TYPHOON_RX_FILTER_BROADCAST;
+ if(dev->flags & IFF_PROMISC) {
+ filter |= TYPHOON_RX_FILTER_PROMISCOUS;
+ } else if ((netdev_mc_count(dev) > multicast_filter_limit) ||
+ (dev->flags & IFF_ALLMULTI)) {
+ /* Too many to match, or accept all multicasts. */
+ filter |= TYPHOON_RX_FILTER_ALL_MCAST;
+ } else if (!netdev_mc_empty(dev)) {
+ struct netdev_hw_addr *ha;
+
+ memset(mc_filter, 0, sizeof(mc_filter));
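+ /* Hash each address into one of 64 filter bits (the low six bits of
+ * the CRC); the table is kept as two 32-bit words.
+ */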
+ netdev_for_each_mc_addr(ha, dev) {
+ int bit = ether_crc(ETH_ALEN, ha->addr) & 0x3f;
+ mc_filter[bit >> 5] |= 1 << (bit & 0x1f);
+ }
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd,
+ TYPHOON_CMD_SET_MULTICAST_HASH);
+ xp_cmd.parm1 = TYPHOON_MCAST_HASH_SET;
+ xp_cmd.parm2 = cpu_to_le32(mc_filter[0]);
+ xp_cmd.parm3 = cpu_to_le32(mc_filter[1]);
+ typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+
+ filter |= TYPHOON_RX_FILTER_MCAST_HASH;
+ }
+
+ INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_RX_FILTER);
+ xp_cmd.parm1 = filter;
+ typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+}
+
+static int
+typhoon_do_get_stats(struct typhoon *tp)
+{
+ struct net_device_stats *stats = &tp->stats;
+ struct net_device_stats *saved = &tp->stats_saved;
+ struct cmd_desc xp_cmd;
+ struct resp_desc xp_resp[7];
+ struct stats_resp *s = (struct stats_resp *) xp_resp;
+ int err;
+
+ INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_STATS);
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 7, xp_resp);
+ if(err < 0)
+ return err;
+
+ /* 3Com's Linux driver uses txMultipleCollisions as its
+ * collisions value, but there is some other collision info as well...
+ *
+ * The extra status reported would be a good candidate for
+ * ethtool_ops->get_{strings,stats}()
+ */
+ stats->tx_packets = le32_to_cpu(s->txPackets) +
+ saved->tx_packets;
+ stats->tx_bytes = le64_to_cpu(s->txBytes) +
+ saved->tx_bytes;
+ stats->tx_errors = le32_to_cpu(s->txCarrierLost) +
+ saved->tx_errors;
+ stats->tx_carrier_errors = le32_to_cpu(s->txCarrierLost) +
+ saved->tx_carrier_errors;
+ stats->collisions = le32_to_cpu(s->txMultipleCollisions) +
+ saved->collisions;
+ stats->rx_packets = le32_to_cpu(s->rxPacketsGood) +
+ saved->rx_packets;
+ stats->rx_bytes = le64_to_cpu(s->rxBytesGood) +
+ saved->rx_bytes;
+ stats->rx_fifo_errors = le32_to_cpu(s->rxFifoOverruns) +
+ saved->rx_fifo_errors;
+ stats->rx_errors = le32_to_cpu(s->rxFifoOverruns) +
+ le32_to_cpu(s->BadSSD) + le32_to_cpu(s->rxCrcErrors) +
+ saved->rx_errors;
+ stats->rx_crc_errors = le32_to_cpu(s->rxCrcErrors) +
+ saved->rx_crc_errors;
+ stats->rx_length_errors = le32_to_cpu(s->rxOversized) +
+ saved->rx_length_errors;
+ tp->speed = (s->linkStatus & TYPHOON_LINK_100MBPS) ?
+ SPEED_100 : SPEED_10;
+ tp->duplex = (s->linkStatus & TYPHOON_LINK_FULL_DUPLEX) ?
+ DUPLEX_FULL : DUPLEX_HALF;
+
+ return 0;
+}
+
+static struct net_device_stats *
+typhoon_get_stats(struct net_device *dev)
+{
+ struct typhoon *tp = netdev_priv(dev);
+ struct net_device_stats *stats = &tp->stats;
+ struct net_device_stats *saved = &tp->stats_saved;
+
+ smp_rmb();
+ if(tp->card_state == Sleeping)
+ return saved;
+
+ if(typhoon_do_get_stats(tp) < 0) {
+ netdev_err(dev, "error getting stats\n");
+ return saved;
+ }
+
+ return stats;
+}
+
+static int
+typhoon_set_mac_address(struct net_device *dev, void *addr)
+{
+ struct sockaddr *saddr = (struct sockaddr *) addr;
+
+ if(netif_running(dev))
+ return -EBUSY;
+
+ memcpy(dev->dev_addr, saddr->sa_data, dev->addr_len);
+ return 0;
+}
+
+static void
+typhoon_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
+{
+ struct typhoon *tp = netdev_priv(dev);
+ struct pci_dev *pci_dev = tp->pdev;
+ struct cmd_desc xp_cmd;
+ struct resp_desc xp_resp[3];
+
+ smp_rmb();
+ if(tp->card_state == Sleeping) {
+ strcpy(info->fw_version, "Sleep image");
+ } else {
+ INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_VERSIONS);
+ if(typhoon_issue_command(tp, 1, &xp_cmd, 3, xp_resp) < 0) {
+ strcpy(info->fw_version, "Unknown runtime");
+ } else {
+ u32 sleep_ver = le32_to_cpu(xp_resp[0].parm2);
+ snprintf(info->fw_version, 32, "%02x.%03x.%03x",
+ sleep_ver >> 24, (sleep_ver >> 12) & 0xfff,
+ sleep_ver & 0xfff);
+ }
+ }
+
+ strcpy(info->driver, KBUILD_MODNAME);
+ strcpy(info->bus_info, pci_name(pci_dev));
+}
+
+static int
+typhoon_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct typhoon *tp = netdev_priv(dev);
+
+ cmd->supported = SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
+ SUPPORTED_Autoneg;
+
+ switch (tp->xcvr_select) {
+ case TYPHOON_XCVR_10HALF:
+ cmd->advertising = ADVERTISED_10baseT_Half;
+ break;
+ case TYPHOON_XCVR_10FULL:
+ cmd->advertising = ADVERTISED_10baseT_Full;
+ break;
+ case TYPHOON_XCVR_100HALF:
+ cmd->advertising = ADVERTISED_100baseT_Half;
+ break;
+ case TYPHOON_XCVR_100FULL:
+ cmd->advertising = ADVERTISED_100baseT_Full;
+ break;
+ case TYPHOON_XCVR_AUTONEG:
+ cmd->advertising = ADVERTISED_10baseT_Half |
+ ADVERTISED_10baseT_Full |
+ ADVERTISED_100baseT_Half |
+ ADVERTISED_100baseT_Full |
+ ADVERTISED_Autoneg;
+ break;
+ }
+
+ if(tp->capabilities & TYPHOON_FIBER) {
+ cmd->supported |= SUPPORTED_FIBRE;
+ cmd->advertising |= ADVERTISED_FIBRE;
+ cmd->port = PORT_FIBRE;
+ } else {
+ cmd->supported |= SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full |
+ SUPPORTED_TP;
+ cmd->advertising |= ADVERTISED_TP;
+ cmd->port = PORT_TP;
+ }
+
+ /* need to get stats to make these link speed/duplex valid */
+ typhoon_do_get_stats(tp);
+ ethtool_cmd_speed_set(cmd, tp->speed);
+ cmd->duplex = tp->duplex;
+ cmd->phy_address = 0;
+ cmd->transceiver = XCVR_INTERNAL;
+ if(tp->xcvr_select == TYPHOON_XCVR_AUTONEG)
+ cmd->autoneg = AUTONEG_ENABLE;
+ else
+ cmd->autoneg = AUTONEG_DISABLE;
+ cmd->maxtxpkt = 1;
+ cmd->maxrxpkt = 1;
+
+ return 0;
+}
+
+static int
+typhoon_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct typhoon *tp = netdev_priv(dev);
+ u32 speed = ethtool_cmd_speed(cmd);
+ struct cmd_desc xp_cmd;
+ __le16 xcvr;
+ int err;
+
+ err = -EINVAL;
+ if (cmd->autoneg == AUTONEG_ENABLE) {
+ xcvr = TYPHOON_XCVR_AUTONEG;
+ } else {
+ if (cmd->duplex == DUPLEX_HALF) {
+ if (speed == SPEED_10)
+ xcvr = TYPHOON_XCVR_10HALF;
+ else if (speed == SPEED_100)
+ xcvr = TYPHOON_XCVR_100HALF;
+ else
+ goto out;
+ } else if (cmd->duplex == DUPLEX_FULL) {
+ if (speed == SPEED_10)
+ xcvr = TYPHOON_XCVR_10FULL;
+ else if (speed == SPEED_100)
+ xcvr = TYPHOON_XCVR_100FULL;
+ else
+ goto out;
+ } else
+ goto out;
+ }
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_XCVR_SELECT);
+ xp_cmd.parm1 = xcvr;
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto out;
+
+ tp->xcvr_select = xcvr;
+ if(cmd->autoneg == AUTONEG_ENABLE) {
+ tp->speed = 0xff; /* invalid */
+ tp->duplex = 0xff; /* invalid */
+ } else {
+ tp->speed = speed;
+ tp->duplex = cmd->duplex;
+ }
+
+out:
+ return err;
+}
+
+static void
+typhoon_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct typhoon *tp = netdev_priv(dev);
+
+ wol->supported = WAKE_PHY | WAKE_MAGIC;
+ wol->wolopts = 0;
+ if(tp->wol_events & TYPHOON_WAKE_LINK_EVENT)
+ wol->wolopts |= WAKE_PHY;
+ if(tp->wol_events & TYPHOON_WAKE_MAGIC_PKT)
+ wol->wolopts |= WAKE_MAGIC;
+ memset(&wol->sopass, 0, sizeof(wol->sopass));
+}
+
+static int
+typhoon_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct typhoon *tp = netdev_priv(dev);
+
+ if(wol->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
+ return -EINVAL;
+
+ tp->wol_events = 0;
+ if(wol->wolopts & WAKE_PHY)
+ tp->wol_events |= TYPHOON_WAKE_LINK_EVENT;
+ if(wol->wolopts & WAKE_MAGIC)
+ tp->wol_events |= TYPHOON_WAKE_MAGIC_PKT;
+
+ return 0;
+}
+
+static void
+typhoon_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ering)
+{
+ ering->rx_max_pending = RXENT_ENTRIES;
+ ering->rx_mini_max_pending = 0;
+ ering->rx_jumbo_max_pending = 0;
+ ering->tx_max_pending = TXLO_ENTRIES - 1;
+
+ ering->rx_pending = RXENT_ENTRIES;
+ ering->rx_mini_pending = 0;
+ ering->rx_jumbo_pending = 0;
+ ering->tx_pending = TXLO_ENTRIES - 1;
+}
+
+static const struct ethtool_ops typhoon_ethtool_ops = {
+ .get_settings = typhoon_get_settings,
+ .set_settings = typhoon_set_settings,
+ .get_drvinfo = typhoon_get_drvinfo,
+ .get_wol = typhoon_get_wol,
+ .set_wol = typhoon_set_wol,
+ .get_link = ethtool_op_get_link,
+ .get_ringparam = typhoon_get_ringparam,
+};
+
+static int
+typhoon_wait_interrupt(void __iomem *ioaddr)
+{
+ int i, err = 0;
+
+ for(i = 0; i < TYPHOON_WAIT_TIMEOUT; i++) {
+ if(ioread32(ioaddr + TYPHOON_REG_INTR_STATUS) &
+ TYPHOON_INTR_BOOTCMD)
+ goto out;
+ udelay(TYPHOON_UDELAY);
+ }
+
+ err = -ETIMEDOUT;
+
+out:
+ iowrite32(TYPHOON_INTR_BOOTCMD, ioaddr + TYPHOON_REG_INTR_STATUS);
+ return err;
+}
+
+#define shared_offset(x) offsetof(struct typhoon_shared, x)
+
+static void
+typhoon_init_interface(struct typhoon *tp)
+{
+ struct typhoon_interface *iface = &tp->shared->iface;
+ dma_addr_t shared_dma;
+
+ memset(tp->shared, 0, sizeof(struct typhoon_shared));
+
+ /* The *Hi members of iface are all init'd to zero by the memset().
+ */
+ shared_dma = tp->shared_dma + shared_offset(indexes);
+ iface->ringIndex = cpu_to_le32(shared_dma);
+
+ shared_dma = tp->shared_dma + shared_offset(txLo);
+ iface->txLoAddr = cpu_to_le32(shared_dma);
+ iface->txLoSize = cpu_to_le32(TXLO_ENTRIES * sizeof(struct tx_desc));
+
+ shared_dma = tp->shared_dma + shared_offset(txHi);
+ iface->txHiAddr = cpu_to_le32(shared_dma);
+ iface->txHiSize = cpu_to_le32(TXHI_ENTRIES * sizeof(struct tx_desc));
+
+ shared_dma = tp->shared_dma + shared_offset(rxBuff);
+ iface->rxBuffAddr = cpu_to_le32(shared_dma);
+ iface->rxBuffSize = cpu_to_le32(RXFREE_ENTRIES *
+ sizeof(struct rx_free));
+
+ shared_dma = tp->shared_dma + shared_offset(rxLo);
+ iface->rxLoAddr = cpu_to_le32(shared_dma);
+ iface->rxLoSize = cpu_to_le32(RX_ENTRIES * sizeof(struct rx_desc));
+
+ shared_dma = tp->shared_dma + shared_offset(rxHi);
+ iface->rxHiAddr = cpu_to_le32(shared_dma);
+ iface->rxHiSize = cpu_to_le32(RX_ENTRIES * sizeof(struct rx_desc));
+
+ shared_dma = tp->shared_dma + shared_offset(cmd);
+ iface->cmdAddr = cpu_to_le32(shared_dma);
+ iface->cmdSize = cpu_to_le32(COMMAND_RING_SIZE);
+
+ shared_dma = tp->shared_dma + shared_offset(resp);
+ iface->respAddr = cpu_to_le32(shared_dma);
+ iface->respSize = cpu_to_le32(RESPONSE_RING_SIZE);
+
+ shared_dma = tp->shared_dma + shared_offset(zeroWord);
+ iface->zeroAddr = cpu_to_le32(shared_dma);
+
+ tp->indexes = &tp->shared->indexes;
+ tp->txLoRing.ringBase = (u8 *) tp->shared->txLo;
+ tp->txHiRing.ringBase = (u8 *) tp->shared->txHi;
+ tp->rxLoRing.ringBase = (u8 *) tp->shared->rxLo;
+ tp->rxHiRing.ringBase = (u8 *) tp->shared->rxHi;
+ tp->rxBuffRing.ringBase = (u8 *) tp->shared->rxBuff;
+ tp->cmdRing.ringBase = (u8 *) tp->shared->cmd;
+ tp->respRing.ringBase = (u8 *) tp->shared->resp;
+
+ tp->txLoRing.writeRegister = TYPHOON_REG_TX_LO_READY;
+ tp->txHiRing.writeRegister = TYPHOON_REG_TX_HI_READY;
+
+ tp->txlo_dma_addr = le32_to_cpu(iface->txLoAddr);
+ tp->card_state = Sleeping;
+
+ tp->offload = TYPHOON_OFFLOAD_IP_CHKSUM | TYPHOON_OFFLOAD_TCP_CHKSUM;
+ tp->offload |= TYPHOON_OFFLOAD_UDP_CHKSUM | TSO_OFFLOAD_ON;
+ tp->offload |= TYPHOON_OFFLOAD_VLAN;
+
+ spin_lock_init(&tp->command_lock);
+
+ /* Force the writes to the shared memory area out before continuing. */
+ wmb();
+}
+
+static void
+typhoon_init_rings(struct typhoon *tp)
+{
+ memset(tp->indexes, 0, sizeof(struct typhoon_indexes));
+
+ tp->txLoRing.lastWrite = 0;
+ tp->txHiRing.lastWrite = 0;
+ tp->rxLoRing.lastWrite = 0;
+ tp->rxHiRing.lastWrite = 0;
+ tp->rxBuffRing.lastWrite = 0;
+ tp->cmdRing.lastWrite = 0;
+ tp->respRing.lastWrite = 0;
+
+ tp->txLoRing.lastRead = 0;
+ tp->txHiRing.lastRead = 0;
+}
+
+static const struct firmware *typhoon_fw;
+
+static int
+typhoon_request_firmware(struct typhoon *tp)
+{
+ const struct typhoon_file_header *fHdr;
+ const struct typhoon_section_header *sHdr;
+ const u8 *image_data;
+ u32 numSections;
+ u32 section_len;
+ u32 remaining;
+ int err;
+
+ if (typhoon_fw)
+ return 0;
+
+ err = request_firmware(&typhoon_fw, FIRMWARE_NAME, &tp->pdev->dev);
+ if (err) {
+ netdev_err(tp->dev, "Failed to load firmware \"%s\"\n",
+ FIRMWARE_NAME);
+ return err;
+ }
+
+ image_data = (u8 *) typhoon_fw->data;
+ remaining = typhoon_fw->size;
+ if (remaining < sizeof(struct typhoon_file_header))
+ goto invalid_fw;
+
+ fHdr = (struct typhoon_file_header *) image_data;
+ if (memcmp(fHdr->tag, "TYPHOON", 8))
+ goto invalid_fw;
+
+ numSections = le32_to_cpu(fHdr->numSections);
+ image_data += sizeof(struct typhoon_file_header);
+ remaining -= sizeof(struct typhoon_file_header);
+
+ while (numSections--) {
+ if (remaining < sizeof(struct typhoon_section_header))
+ goto invalid_fw;
+
+ sHdr = (struct typhoon_section_header *) image_data;
+ image_data += sizeof(struct typhoon_section_header);
+ section_len = le32_to_cpu(sHdr->len);
+
+ if (remaining < section_len)
+ goto invalid_fw;
+
+ image_data += section_len;
+ remaining -= section_len;
+ }
+
+ return 0;
+
+invalid_fw:
+ netdev_err(tp->dev, "Invalid firmware image\n");
+ release_firmware(typhoon_fw);
+ typhoon_fw = NULL;
+ return -EINVAL;
+}
+
+static int
+typhoon_download_firmware(struct typhoon *tp)
+{
+ void __iomem *ioaddr = tp->ioaddr;
+ struct pci_dev *pdev = tp->pdev;
+ const struct typhoon_file_header *fHdr;
+ const struct typhoon_section_header *sHdr;
+ const u8 *image_data;
+ void *dpage;
+ dma_addr_t dpage_dma;
+ __sum16 csum;
+ u32 irqEnabled;
+ u32 irqMasked;
+ u32 numSections;
+ u32 section_len;
+ u32 len;
+ u32 load_addr;
+ u32 hmac;
+ int i;
+ int err;
+
+ image_data = (u8 *) typhoon_fw->data;
+ fHdr = (struct typhoon_file_header *) image_data;
+
+ /* Cannot just map the firmware image using pci_map_single() as
+ * the firmware is vmalloc()'d and may not be physically contiguous,
+ * so we allocate some consistent memory to copy the sections into.
+ */
+ err = -ENOMEM;
+ dpage = pci_alloc_consistent(pdev, PAGE_SIZE, &dpage_dma);
+ if(!dpage) {
+ netdev_err(tp->dev, "no DMA mem for firmware\n");
+ goto err_out;
+ }
+
+ irqEnabled = ioread32(ioaddr + TYPHOON_REG_INTR_ENABLE);
+ iowrite32(irqEnabled | TYPHOON_INTR_BOOTCMD,
+ ioaddr + TYPHOON_REG_INTR_ENABLE);
+ irqMasked = ioread32(ioaddr + TYPHOON_REG_INTR_MASK);
+ iowrite32(irqMasked | TYPHOON_INTR_BOOTCMD,
+ ioaddr + TYPHOON_REG_INTR_MASK);
+
+ err = -ETIMEDOUT;
+ if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_WAITING_FOR_HOST) < 0) {
+ netdev_err(tp->dev, "card ready timeout\n");
+ goto err_out_irq;
+ }
+
+ numSections = le32_to_cpu(fHdr->numSections);
+ load_addr = le32_to_cpu(fHdr->startAddr);
+
+ iowrite32(TYPHOON_INTR_BOOTCMD, ioaddr + TYPHOON_REG_INTR_STATUS);
+ iowrite32(load_addr, ioaddr + TYPHOON_REG_DOWNLOAD_BOOT_ADDR);
+ hmac = le32_to_cpu(fHdr->hmacDigest[0]);
+ iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_0);
+ hmac = le32_to_cpu(fHdr->hmacDigest[1]);
+ iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_1);
+ hmac = le32_to_cpu(fHdr->hmacDigest[2]);
+ iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_2);
+ hmac = le32_to_cpu(fHdr->hmacDigest[3]);
+ iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_3);
+ hmac = le32_to_cpu(fHdr->hmacDigest[4]);
+ iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_4);
+ typhoon_post_pci_writes(ioaddr);
+ iowrite32(TYPHOON_BOOTCMD_RUNTIME_IMAGE, ioaddr + TYPHOON_REG_COMMAND);
+
+ image_data += sizeof(struct typhoon_file_header);
+
+ /* The ioread32() in typhoon_wait_interrupt() will force the
+ * last write to the command register to post, so
+ * we don't need a typhoon_post_pci_writes() after it.
+ */
+ for(i = 0; i < numSections; i++) {
+ sHdr = (struct typhoon_section_header *) image_data;
+ image_data += sizeof(struct typhoon_section_header);
+ load_addr = le32_to_cpu(sHdr->startAddr);
+ section_len = le32_to_cpu(sHdr->len);
+
+ while(section_len) {
+ len = min_t(u32, section_len, PAGE_SIZE);
+
+ if(typhoon_wait_interrupt(ioaddr) < 0 ||
+ ioread32(ioaddr + TYPHOON_REG_STATUS) !=
+ TYPHOON_STATUS_WAITING_FOR_SEGMENT) {
+ netdev_err(tp->dev, "segment ready timeout\n");
+ goto err_out_irq;
+ }
+
+ /* Do a pseudo IPv4 checksum on the data -- first
+ * need to convert each u16 to cpu order before
+ * summing. Fortunately, due to the properties of
+ * the checksum, we can do this once, at the end.
+ */
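+ /* csum_partial_copy_nocheck() copies the chunk into the DMA page
+ * and accumulates the one's-complement sum in the same pass;
+ * csum_fold() then folds it down to 16 bits for the BOOT_CHECKSUM
+ * register.
+ */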
+ csum = csum_fold(csum_partial_copy_nocheck(image_data,
+ dpage, len,
+ 0));
+
+ iowrite32(len, ioaddr + TYPHOON_REG_BOOT_LENGTH);
+ iowrite32(le16_to_cpu((__force __le16)csum),
+ ioaddr + TYPHOON_REG_BOOT_CHECKSUM);
+ iowrite32(load_addr,
+ ioaddr + TYPHOON_REG_BOOT_DEST_ADDR);
+ iowrite32(0, ioaddr + TYPHOON_REG_BOOT_DATA_HI);
+ iowrite32(dpage_dma, ioaddr + TYPHOON_REG_BOOT_DATA_LO);
+ typhoon_post_pci_writes(ioaddr);
+ iowrite32(TYPHOON_BOOTCMD_SEG_AVAILABLE,
+ ioaddr + TYPHOON_REG_COMMAND);
+
+ image_data += len;
+ load_addr += len;
+ section_len -= len;
+ }
+ }
+
+ if(typhoon_wait_interrupt(ioaddr) < 0 ||
+ ioread32(ioaddr + TYPHOON_REG_STATUS) !=
+ TYPHOON_STATUS_WAITING_FOR_SEGMENT) {
+ netdev_err(tp->dev, "final segment ready timeout\n");
+ goto err_out_irq;
+ }
+
+ iowrite32(TYPHOON_BOOTCMD_DNLD_COMPLETE, ioaddr + TYPHOON_REG_COMMAND);
+
+ if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_WAITING_FOR_BOOT) < 0) {
+ netdev_err(tp->dev, "boot ready timeout, status 0x%0x\n",
+ ioread32(ioaddr + TYPHOON_REG_STATUS));
+ goto err_out_irq;
+ }
+
+ err = 0;
+
+err_out_irq:
+ iowrite32(irqMasked, ioaddr + TYPHOON_REG_INTR_MASK);
+ iowrite32(irqEnabled, ioaddr + TYPHOON_REG_INTR_ENABLE);
+
+ pci_free_consistent(pdev, PAGE_SIZE, dpage, dpage_dma);
+
+err_out:
+ return err;
+}
+
+static int
+typhoon_boot_3XP(struct typhoon *tp, u32 initial_status)
+{
+ void __iomem *ioaddr = tp->ioaddr;
+
+ if(typhoon_wait_status(ioaddr, initial_status) < 0) {
+ netdev_err(tp->dev, "boot ready timeout\n");
+ goto out_timeout;
+ }
+
+ iowrite32(0, ioaddr + TYPHOON_REG_BOOT_RECORD_ADDR_HI);
+ iowrite32(tp->shared_dma, ioaddr + TYPHOON_REG_BOOT_RECORD_ADDR_LO);
+ typhoon_post_pci_writes(ioaddr);
+ iowrite32(TYPHOON_BOOTCMD_REG_BOOT_RECORD,
+ ioaddr + TYPHOON_REG_COMMAND);
+
+ if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_RUNNING) < 0) {
+ netdev_err(tp->dev, "boot finish timeout (status 0x%x)\n",
+ ioread32(ioaddr + TYPHOON_REG_STATUS));
+ goto out_timeout;
+ }
+
+ /* Clear the Transmit and Command ready registers
+ */
+ iowrite32(0, ioaddr + TYPHOON_REG_TX_HI_READY);
+ iowrite32(0, ioaddr + TYPHOON_REG_CMD_READY);
+ iowrite32(0, ioaddr + TYPHOON_REG_TX_LO_READY);
+ typhoon_post_pci_writes(ioaddr);
+ iowrite32(TYPHOON_BOOTCMD_BOOT, ioaddr + TYPHOON_REG_COMMAND);
+
+ return 0;
+
+out_timeout:
+ return -ETIMEDOUT;
+}
+
+static u32
+typhoon_clean_tx(struct typhoon *tp, struct transmit_ring *txRing,
+ volatile __le32 * index)
+{
+ u32 lastRead = txRing->lastRead;
+ struct tx_desc *tx;
+ dma_addr_t skb_dma;
+ int dma_len;
+ int type;
+
+ while(lastRead != le32_to_cpu(*index)) {
+ tx = (struct tx_desc *) (txRing->ringBase + lastRead);
+ type = tx->flags & TYPHOON_TYPE_MASK;
+
+ if(type == TYPHOON_TX_DESC) {
+ /* This tx_desc describes a packet.
+ */
+ unsigned long ptr = tx->tx_addr;
+ struct sk_buff *skb = (struct sk_buff *) ptr;
+ dev_kfree_skb_irq(skb);
+ } else if(type == TYPHOON_FRAG_DESC) {
+ /* This tx_desc describes a memory mapping. Free it.
+ */
+ skb_dma = (dma_addr_t) le32_to_cpu(tx->frag.addr);
+ dma_len = le16_to_cpu(tx->len);
+ pci_unmap_single(tp->pdev, skb_dma, dma_len,
+ PCI_DMA_TODEVICE);
+ }
+
+ tx->flags = 0;
+ typhoon_inc_tx_index(&lastRead, 1);
+ }
+
+ return lastRead;
+}
+
+static void
+typhoon_tx_complete(struct typhoon *tp, struct transmit_ring *txRing,
+ volatile __le32 * index)
+{
+ u32 lastRead;
+ int numDesc = MAX_SKB_FRAGS + 1;
+
+ /* This will need changing if we start to use the Hi Tx ring. */
+ lastRead = typhoon_clean_tx(tp, txRing, index);
+ if(netif_queue_stopped(tp->dev) && typhoon_num_free(txRing->lastWrite,
+ lastRead, TXLO_ENTRIES) > (numDesc + 2))
+ netif_wake_queue(tp->dev);
+
+ txRing->lastRead = lastRead;
+ smp_wmb();
+}
+
+static void
+typhoon_recycle_rx_skb(struct typhoon *tp, u32 idx)
+{
+ struct typhoon_indexes *indexes = tp->indexes;
+ struct rxbuff_ent *rxb = &tp->rxbuffers[idx];
+ struct basic_ring *ring = &tp->rxBuffRing;
+ struct rx_free *r;
+
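+ /* If advancing lastWrite by one entry would land on the card's
+ * cleared index, the free ring is full (one slot always stays empty),
+ * so drop the skb rather than overwrite.
+ */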
+ if((ring->lastWrite + sizeof(*r)) % (RXFREE_ENTRIES * sizeof(*r)) ==
+ le32_to_cpu(indexes->rxBuffCleared)) {
+ /* no room in ring, just drop the skb
+ */
+ dev_kfree_skb_any(rxb->skb);
+ rxb->skb = NULL;
+ return;
+ }
+
+ r = (struct rx_free *) (ring->ringBase + ring->lastWrite);
+ typhoon_inc_rxfree_index(&ring->lastWrite, 1);
+ r->virtAddr = idx;
+ r->physAddr = cpu_to_le32(rxb->dma_addr);
+
+ /* Tell the card about it */
+ wmb();
+ indexes->rxBuffReady = cpu_to_le32(ring->lastWrite);
+}
+
+static int
+typhoon_alloc_rx_skb(struct typhoon *tp, u32 idx)
+{
+ struct typhoon_indexes *indexes = tp->indexes;
+ struct rxbuff_ent *rxb = &tp->rxbuffers[idx];
+ struct basic_ring *ring = &tp->rxBuffRing;
+ struct rx_free *r;
+ struct sk_buff *skb;
+ dma_addr_t dma_addr;
+
+ rxb->skb = NULL;
+
+ if((ring->lastWrite + sizeof(*r)) % (RXFREE_ENTRIES * sizeof(*r)) ==
+ le32_to_cpu(indexes->rxBuffCleared))
+ return -ENOMEM;
+
+ skb = dev_alloc_skb(PKT_BUF_SZ);
+ if(!skb)
+ return -ENOMEM;
+
+#if 0
+ /* Please, 3com, fix the firmware to allow DMA to an unaligned
+ * address! Pretty please?
+ */
+ skb_reserve(skb, 2);
+#endif
+
+ skb->dev = tp->dev;
+ dma_addr = pci_map_single(tp->pdev, skb->data,
+ PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+
+ /* Since no card does 64 bit DAC, the high bits will never
+ * change from zero.
+ */
+ r = (struct rx_free *) (ring->ringBase + ring->lastWrite);
+ typhoon_inc_rxfree_index(&ring->lastWrite, 1);
+ r->virtAddr = idx;
+ r->physAddr = cpu_to_le32(dma_addr);
+ rxb->skb = skb;
+ rxb->dma_addr = dma_addr;
+
+ /* Tell the card about it */
+ wmb();
+ indexes->rxBuffReady = cpu_to_le32(ring->lastWrite);
+ return 0;
+}
+
+static int
+typhoon_rx(struct typhoon *tp, struct basic_ring *rxRing, volatile __le32 * ready,
+ volatile __le32 * cleared, int budget)
+{
+ struct rx_desc *rx;
+ struct sk_buff *skb, *new_skb;
+ struct rxbuff_ent *rxb;
+ dma_addr_t dma_addr;
+ u32 local_ready;
+ u32 rxaddr;
+ int pkt_len;
+ u32 idx;
+ __le32 csum_bits;
+ int received;
+
+ received = 0;
+ local_ready = le32_to_cpu(*ready);
+ rxaddr = le32_to_cpu(*cleared);
+ while(rxaddr != local_ready && budget > 0) {
+ rx = (struct rx_desc *) (rxRing->ringBase + rxaddr);
+ idx = rx->addr;
+ rxb = &tp->rxbuffers[idx];
+ skb = rxb->skb;
+ dma_addr = rxb->dma_addr;
+
+ typhoon_inc_rx_index(&rxaddr, 1);
+
+ if(rx->flags & TYPHOON_RX_ERROR) {
+ typhoon_recycle_rx_skb(tp, idx);
+ continue;
+ }
+
+ pkt_len = le16_to_cpu(rx->frameLen);
+
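+ /* Copybreak: small frames are copied into a fresh skb so the original
+ * DMA buffer can be recycled; larger frames are handed up directly and
+ * a new receive buffer is allocated.
+ */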
+ if(pkt_len < rx_copybreak &&
+ (new_skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
+ skb_reserve(new_skb, 2);
+ pci_dma_sync_single_for_cpu(tp->pdev, dma_addr,
+ PKT_BUF_SZ,
+ PCI_DMA_FROMDEVICE);
+ skb_copy_to_linear_data(new_skb, skb->data, pkt_len);
+ pci_dma_sync_single_for_device(tp->pdev, dma_addr,
+ PKT_BUF_SZ,
+ PCI_DMA_FROMDEVICE);
+ skb_put(new_skb, pkt_len);
+ typhoon_recycle_rx_skb(tp, idx);
+ } else {
+ new_skb = skb;
+ skb_put(new_skb, pkt_len);
+ pci_unmap_single(tp->pdev, dma_addr, PKT_BUF_SZ,
+ PCI_DMA_FROMDEVICE);
+ typhoon_alloc_rx_skb(tp, idx);
+ }
+ new_skb->protocol = eth_type_trans(new_skb, tp->dev);
+ csum_bits = rx->rxStatus & (TYPHOON_RX_IP_CHK_GOOD |
+ TYPHOON_RX_UDP_CHK_GOOD | TYPHOON_RX_TCP_CHK_GOOD);
+ if(csum_bits ==
+ (TYPHOON_RX_IP_CHK_GOOD | TYPHOON_RX_TCP_CHK_GOOD) ||
+ csum_bits ==
+ (TYPHOON_RX_IP_CHK_GOOD | TYPHOON_RX_UDP_CHK_GOOD)) {
+ new_skb->ip_summed = CHECKSUM_UNNECESSARY;
+ } else
+ skb_checksum_none_assert(new_skb);
+
+ if (rx->rxStatus & TYPHOON_RX_VLAN)
+ __vlan_hwaccel_put_tag(new_skb,
+ ntohl(rx->vlanTag) & 0xffff);
+ netif_receive_skb(new_skb);
+
+ received++;
+ budget--;
+ }
+ *cleared = cpu_to_le32(rxaddr);
+
+ return received;
+}
+
+static void
+typhoon_fill_free_ring(struct typhoon *tp)
+{
+ u32 i;
+
+ for(i = 0; i < RXENT_ENTRIES; i++) {
+ struct rxbuff_ent *rxb = &tp->rxbuffers[i];
+ if(rxb->skb)
+ continue;
+ if(typhoon_alloc_rx_skb(tp, i) < 0)
+ break;
+ }
+}
+
+static int
+typhoon_poll(struct napi_struct *napi, int budget)
+{
+ struct typhoon *tp = container_of(napi, struct typhoon, napi);
+ struct typhoon_indexes *indexes = tp->indexes;
+ int work_done;
+
+ rmb();
+ if(!tp->awaiting_resp && indexes->respReady != indexes->respCleared)
+ typhoon_process_response(tp, 0, NULL);
+
+ if(le32_to_cpu(indexes->txLoCleared) != tp->txLoRing.lastRead)
+ typhoon_tx_complete(tp, &tp->txLoRing, &indexes->txLoCleared);
+
+ work_done = 0;
+
+ if(indexes->rxHiCleared != indexes->rxHiReady) {
+ work_done += typhoon_rx(tp, &tp->rxHiRing, &indexes->rxHiReady,
+ &indexes->rxHiCleared, budget);
+ }
+
+ if(indexes->rxLoCleared != indexes->rxLoReady) {
+ work_done += typhoon_rx(tp, &tp->rxLoRing, &indexes->rxLoReady,
+ &indexes->rxLoCleared, budget - work_done);
+ }
+
+ if(le32_to_cpu(indexes->rxBuffCleared) == tp->rxBuffRing.lastWrite) {
+ /* rxBuff ring is empty, try to fill it. */
+ typhoon_fill_free_ring(tp);
+ }
+
+ if (work_done < budget) {
+ napi_complete(napi);
+ iowrite32(TYPHOON_INTR_NONE,
+ tp->ioaddr + TYPHOON_REG_INTR_MASK);
+ typhoon_post_pci_writes(tp->ioaddr);
+ }
+
+ return work_done;
+}
+
+static irqreturn_t
+typhoon_interrupt(int irq, void *dev_instance)
+{
+ struct net_device *dev = dev_instance;
+ struct typhoon *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->ioaddr;
+ u32 intr_status;
+
+ intr_status = ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
+ if(!(intr_status & TYPHOON_INTR_HOST_INT))
+ return IRQ_NONE;
+
+ iowrite32(intr_status, ioaddr + TYPHOON_REG_INTR_STATUS);
+
+ if (napi_schedule_prep(&tp->napi)) {
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
+ typhoon_post_pci_writes(ioaddr);
+ __napi_schedule(&tp->napi);
+ } else {
+ netdev_err(dev, "Error, poll already scheduled\n");
+ }
+ return IRQ_HANDLED;
+}
+
+static void
+typhoon_free_rx_rings(struct typhoon *tp)
+{
+ u32 i;
+
+ for(i = 0; i < RXENT_ENTRIES; i++) {
+ struct rxbuff_ent *rxb = &tp->rxbuffers[i];
+ if(rxb->skb) {
+ pci_unmap_single(tp->pdev, rxb->dma_addr, PKT_BUF_SZ,
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(rxb->skb);
+ rxb->skb = NULL;
+ }
+ }
+}
+
+static int
+typhoon_sleep(struct typhoon *tp, pci_power_t state, __le16 events)
+{
+ struct pci_dev *pdev = tp->pdev;
+ void __iomem *ioaddr = tp->ioaddr;
+ struct cmd_desc xp_cmd;
+ int err;
+
+ INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_ENABLE_WAKE_EVENTS);
+ xp_cmd.parm1 = events;
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0) {
+ netdev_err(tp->dev, "typhoon_sleep(): wake events cmd err %d\n",
+ err);
+ return err;
+ }
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_GOTO_SLEEP);
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0) {
+ netdev_err(tp->dev, "typhoon_sleep(): sleep cmd err %d\n", err);
+ return err;
+ }
+
+ if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_SLEEPING) < 0)
+ return -ETIMEDOUT;
+
+ /* Since we cannot monitor the status of the link while sleeping,
+ * tell the world it went away.
+ */
+ netif_carrier_off(tp->dev);
+
+ pci_enable_wake(tp->pdev, state, 1);
+ pci_disable_device(pdev);
+ return pci_set_power_state(pdev, state);
+}
+
+static int
+typhoon_wakeup(struct typhoon *tp, int wait_type)
+{
+ struct pci_dev *pdev = tp->pdev;
+ void __iomem *ioaddr = tp->ioaddr;
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+
+ /* Post 2.x.x versions of the Sleep Image require a reset before
+ * we can download the Runtime Image. But let's not make users of
+ * the old firmware pay for the reset.
+ */
+ iowrite32(TYPHOON_BOOTCMD_WAKEUP, ioaddr + TYPHOON_REG_COMMAND);
+ if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_WAITING_FOR_HOST) < 0 ||
+ (tp->capabilities & TYPHOON_WAKEUP_NEEDS_RESET))
+ return typhoon_reset(ioaddr, wait_type);
+
+ return 0;
+}
+
+static int
+typhoon_start_runtime(struct typhoon *tp)
+{
+ struct net_device *dev = tp->dev;
+ void __iomem *ioaddr = tp->ioaddr;
+ struct cmd_desc xp_cmd;
+ int err;
+
+ typhoon_init_rings(tp);
+ typhoon_fill_free_ring(tp);
+
+ err = typhoon_download_firmware(tp);
+ if(err < 0) {
+ netdev_err(tp->dev, "cannot load runtime on 3XP\n");
+ goto error_out;
+ }
+
+ if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_BOOT) < 0) {
+ netdev_err(tp->dev, "cannot boot 3XP\n");
+ err = -EIO;
+ goto error_out;
+ }
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_MAX_PKT_SIZE);
+ xp_cmd.parm1 = cpu_to_le16(PKT_BUF_SZ);
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto error_out;
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_MAC_ADDRESS);
+ xp_cmd.parm1 = cpu_to_le16(ntohs(*(__be16 *)&dev->dev_addr[0]));
+ xp_cmd.parm2 = cpu_to_le32(ntohl(*(__be32 *)&dev->dev_addr[2]));
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto error_out;
+
+ /* Disable IRQ coalescing -- we can reenable it when 3Com gives
+ * us some more information on how to control it.
+ */
+ INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_IRQ_COALESCE_CTRL);
+ xp_cmd.parm1 = 0;
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto error_out;
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_XCVR_SELECT);
+ xp_cmd.parm1 = tp->xcvr_select;
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto error_out;
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_VLAN_TYPE_WRITE);
+ xp_cmd.parm1 = cpu_to_le16(ETH_P_8021Q);
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto error_out;
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_OFFLOAD_TASKS);
+ xp_cmd.parm2 = tp->offload;
+ xp_cmd.parm3 = tp->offload;
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto error_out;
+
+ typhoon_set_rx_mode(dev);
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_TX_ENABLE);
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto error_out;
+
+ INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_RX_ENABLE);
+ err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ if(err < 0)
+ goto error_out;
+
+ tp->card_state = Running;
+ smp_wmb();
+
+ iowrite32(TYPHOON_INTR_ENABLE_ALL, ioaddr + TYPHOON_REG_INTR_ENABLE);
+ iowrite32(TYPHOON_INTR_NONE, ioaddr + TYPHOON_REG_INTR_MASK);
+ typhoon_post_pci_writes(ioaddr);
+
+ return 0;
+
+error_out:
+ typhoon_reset(ioaddr, WaitNoSleep);
+ typhoon_free_rx_rings(tp);
+ typhoon_init_rings(tp);
+ return err;
+}
+
+static int
+typhoon_stop_runtime(struct typhoon *tp, int wait_type)
+{
+ struct typhoon_indexes *indexes = tp->indexes;
+ struct transmit_ring *txLo = &tp->txLoRing;
+ void __iomem *ioaddr = tp->ioaddr;
+ struct cmd_desc xp_cmd;
+ int i;
+
+ /* Disable interrupts early, since we can't schedule a poll
+ * when called with !netif_running(). This will be posted
+ * when we force the posting of the command.
+ */
+ iowrite32(TYPHOON_INTR_NONE, ioaddr + TYPHOON_REG_INTR_ENABLE);
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_RX_DISABLE);
+ typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+
+ /* Wait 1/2 sec for any outstanding transmits to occur
+ * We'll cleanup after the reset if this times out.
+ */
+ for(i = 0; i < TYPHOON_WAIT_TIMEOUT; i++) {
+ if(indexes->txLoCleared == cpu_to_le32(txLo->lastWrite))
+ break;
+ udelay(TYPHOON_UDELAY);
+ }
+
+ if(i == TYPHOON_WAIT_TIMEOUT)
+ netdev_err(tp->dev, "halt timed out waiting for Tx to complete\n");
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_TX_DISABLE);
+ typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+
+ /* save the statistics so when we bring the interface up again,
+ * the values reported to userspace are correct.
+ */
+ tp->card_state = Sleeping;
+ smp_wmb();
+ typhoon_do_get_stats(tp);
+ memcpy(&tp->stats_saved, &tp->stats, sizeof(struct net_device_stats));
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_HALT);
+ typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+
+ if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_HALTED) < 0)
+ netdev_err(tp->dev, "timed out waiting for 3XP to halt\n");
+
+ if(typhoon_reset(ioaddr, wait_type) < 0) {
+ netdev_err(tp->dev, "unable to reset 3XP\n");
+ return -ETIMEDOUT;
+ }
+
+ /* cleanup any outstanding Tx packets */
+ if(indexes->txLoCleared != cpu_to_le32(txLo->lastWrite)) {
+ indexes->txLoCleared = cpu_to_le32(txLo->lastWrite);
+ typhoon_clean_tx(tp, &tp->txLoRing, &indexes->txLoCleared);
+ }
+
+ return 0;
+}
+
+static void
+typhoon_tx_timeout(struct net_device *dev)
+{
+ struct typhoon *tp = netdev_priv(dev);
+
+ if(typhoon_reset(tp->ioaddr, WaitNoSleep) < 0) {
+ netdev_warn(dev, "could not reset in tx timeout\n");
+ goto truly_dead;
+ }
+
+ /* If we ever start using the Hi ring, it will need cleaning too */
+ typhoon_clean_tx(tp, &tp->txLoRing, &tp->indexes->txLoCleared);
+ typhoon_free_rx_rings(tp);
+
+ if(typhoon_start_runtime(tp) < 0) {
+ netdev_err(dev, "could not start runtime in tx timeout\n");
+ goto truly_dead;
+ }
+
+ netif_wake_queue(dev);
+ return;
+
+truly_dead:
+ /* Reset the hardware, and turn off carrier to avoid more timeouts */
+ typhoon_reset(tp->ioaddr, NoWait);
+ netif_carrier_off(dev);
+}
+
+static int
+typhoon_open(struct net_device *dev)
+{
+ struct typhoon *tp = netdev_priv(dev);
+ int err;
+
+ err = typhoon_request_firmware(tp);
+ if (err)
+ goto out;
+
+ err = typhoon_wakeup(tp, WaitSleep);
+ if(err < 0) {
+ netdev_err(dev, "unable to wakeup device\n");
+ goto out_sleep;
+ }
+
+ err = request_irq(dev->irq, typhoon_interrupt, IRQF_SHARED,
+ dev->name, dev);
+ if(err < 0)
+ goto out_sleep;
+
+ napi_enable(&tp->napi);
+
+ err = typhoon_start_runtime(tp);
+ if(err < 0) {
+ napi_disable(&tp->napi);
+ goto out_irq;
+ }
+
+ netif_start_queue(dev);
+ return 0;
+
+out_irq:
+ free_irq(dev->irq, dev);
+
+out_sleep:
+ if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_HOST) < 0) {
+ netdev_err(dev, "unable to reboot into sleep img\n");
+ typhoon_reset(tp->ioaddr, NoWait);
+ goto out;
+ }
+
+ if(typhoon_sleep(tp, PCI_D3hot, 0) < 0)
+ netdev_err(dev, "unable to go back to sleep\n");
+
+out:
+ return err;
+}
+
+static int
+typhoon_close(struct net_device *dev)
+{
+ struct typhoon *tp = netdev_priv(dev);
+
+ netif_stop_queue(dev);
+ napi_disable(&tp->napi);
+
+ if(typhoon_stop_runtime(tp, WaitSleep) < 0)
+ netdev_err(dev, "unable to stop runtime\n");
+
+ /* Make sure there is no irq handler running on a different CPU. */
+ free_irq(dev->irq, dev);
+
+ typhoon_free_rx_rings(tp);
+ typhoon_init_rings(tp);
+
+ if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_HOST) < 0)
+ netdev_err(dev, "unable to boot sleep image\n");
+
+ if(typhoon_sleep(tp, PCI_D3hot, 0) < 0)
+ netdev_err(dev, "unable to put card to sleep\n");
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int
+typhoon_resume(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct typhoon *tp = netdev_priv(dev);
+
+ /* If we're down, resume when we are upped.
+ */
+ if(!netif_running(dev))
+ return 0;
+
+ if(typhoon_wakeup(tp, WaitNoSleep) < 0) {
+ netdev_err(dev, "critical: could not wake up in resume\n");
+ goto reset;
+ }
+
+ if(typhoon_start_runtime(tp) < 0) {
+ netdev_err(dev, "critical: could not start runtime in resume\n");
+ goto reset;
+ }
+
+ netif_device_attach(dev);
+ return 0;
+
+reset:
+ typhoon_reset(tp->ioaddr, NoWait);
+ return -EBUSY;
+}
+
+static int
+typhoon_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct typhoon *tp = netdev_priv(dev);
+ struct cmd_desc xp_cmd;
+
+ /* If we're down, we're already suspended.
+ */
+ if(!netif_running(dev))
+ return 0;
+
+ /* TYPHOON_OFFLOAD_VLAN is always on now, so this doesn't work */
+ if(tp->wol_events & TYPHOON_WAKE_MAGIC_PKT)
+ netdev_warn(dev, "cannot do WAKE_MAGIC with VLAN offloading\n");
+
+ netif_device_detach(dev);
+
+ if(typhoon_stop_runtime(tp, WaitNoSleep) < 0) {
+ netdev_err(dev, "unable to stop runtime\n");
+ goto need_resume;
+ }
+
+ typhoon_free_rx_rings(tp);
+ typhoon_init_rings(tp);
+
+ if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_HOST) < 0) {
+ netdev_err(dev, "unable to boot sleep image\n");
+ goto need_resume;
+ }
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_MAC_ADDRESS);
+ xp_cmd.parm1 = cpu_to_le16(ntohs(*(__be16 *)&dev->dev_addr[0]));
+ xp_cmd.parm2 = cpu_to_le32(ntohl(*(__be32 *)&dev->dev_addr[2]));
+ if(typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL) < 0) {
+ netdev_err(dev, "unable to set mac address in suspend\n");
+ goto need_resume;
+ }
+
+ INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_RX_FILTER);
+ xp_cmd.parm1 = TYPHOON_RX_FILTER_DIRECTED | TYPHOON_RX_FILTER_BROADCAST;
+ if(typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL) < 0) {
+ netdev_err(dev, "unable to set rx filter in suspend\n");
+ goto need_resume;
+ }
+
+ if(typhoon_sleep(tp, pci_choose_state(pdev, state), tp->wol_events) < 0) {
+ netdev_err(dev, "unable to put card to sleep\n");
+ goto need_resume;
+ }
+
+ return 0;
+
+need_resume:
+ typhoon_resume(pdev);
+ return -EBUSY;
+}
+#endif
+
+static int __devinit
+typhoon_test_mmio(struct pci_dev *pdev)
+{
+ void __iomem *ioaddr = pci_iomap(pdev, 1, 128);
+ int mode = 0;
+ u32 val;
+
+ if(!ioaddr)
+ goto out;
+
+ if(ioread32(ioaddr + TYPHOON_REG_STATUS) !=
+ TYPHOON_STATUS_WAITING_FOR_HOST)
+ goto out_unmap;
+
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_ENABLE);
+
+ /* Ok, see if we can change our interrupt status register by
+ * sending ourselves an interrupt. If so, then MMIO works.
+ * The 50usec delay is arbitrary -- it could probably be smaller.
+ */
+ val = ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
+ if((val & TYPHOON_INTR_SELF) == 0) {
+ iowrite32(1, ioaddr + TYPHOON_REG_SELF_INTERRUPT);
+ ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
+ udelay(50);
+ val = ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
+ if(val & TYPHOON_INTR_SELF)
+ mode = 1;
+ }
+
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
+ iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);
+ iowrite32(TYPHOON_INTR_NONE, ioaddr + TYPHOON_REG_INTR_ENABLE);
+ ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
+
+out_unmap:
+ pci_iounmap(pdev, ioaddr);
+
+out:
+ if(!mode)
+ pr_info("%s: falling back to port IO\n", pci_name(pdev));
+ return mode;
+}
+
+static const struct net_device_ops typhoon_netdev_ops = {
+ .ndo_open = typhoon_open,
+ .ndo_stop = typhoon_close,
+ .ndo_start_xmit = typhoon_start_tx,
+ .ndo_set_multicast_list = typhoon_set_rx_mode,
+ .ndo_tx_timeout = typhoon_tx_timeout,
+ .ndo_get_stats = typhoon_get_stats,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = typhoon_set_mac_address,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
+static int __devinit
+typhoon_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ struct net_device *dev;
+ struct typhoon *tp;
+ int card_id = (int) ent->driver_data;
+ void __iomem *ioaddr;
+ void *shared;
+ dma_addr_t shared_dma;
+ struct cmd_desc xp_cmd;
+ struct resp_desc xp_resp[3];
+ int err = 0;
+ const char *err_msg;
+
+ dev = alloc_etherdev(sizeof(*tp));
+ if(dev == NULL) {
+ err_msg = "unable to alloc new net device";
+ err = -ENOMEM;
+ goto error_out;
+ }
+ SET_NETDEV_DEV(dev, &pdev->dev);
+
+ err = pci_enable_device(pdev);
+ if(err < 0) {
+ err_msg = "unable to enable device";
+ goto error_out_dev;
+ }
+
+ err = pci_set_mwi(pdev);
+ if(err < 0) {
+ err_msg = "unable to set MWI";
+ goto error_out_disable;
+ }
+
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if(err < 0) {
+ err_msg = "No usable DMA configuration";
+ goto error_out_mwi;
+ }
+
+ /* sanity checks on IO and MMIO BARs
+ */
+ if(!(pci_resource_flags(pdev, 0) & IORESOURCE_IO)) {
+ err_msg = "region #1 not a PCI IO resource, aborting";
+ err = -ENODEV;
+ goto error_out_mwi;
+ }
+ if(pci_resource_len(pdev, 0) < 128) {
+ err_msg = "Invalid PCI IO region size, aborting";
+ err = -ENODEV;
+ goto error_out_mwi;
+ }
+ if(!(pci_resource_flags(pdev, 1) & IORESOURCE_MEM)) {
+ err_msg = "region #1 not a PCI MMIO resource, aborting";
+ err = -ENODEV;
+ goto error_out_mwi;
+ }
+ if(pci_resource_len(pdev, 1) < 128) {
+ err_msg = "Invalid PCI MMIO region size, aborting";
+ err = -ENODEV;
+ goto error_out_mwi;
+ }
+
+ err = pci_request_regions(pdev, KBUILD_MODNAME);
+ if(err < 0) {
+ err_msg = "could not request regions";
+ goto error_out_mwi;
+ }
+
+ /* map our registers
+ */
+ if(use_mmio != 0 && use_mmio != 1)
+ use_mmio = typhoon_test_mmio(pdev);
+
+ ioaddr = pci_iomap(pdev, use_mmio, 128);
+ if (!ioaddr) {
+ err_msg = "cannot remap registers, aborting";
+ err = -EIO;
+ goto error_out_regions;
+ }
+
+ /* allocate pci dma space for rx and tx descriptor rings
+ */
+ shared = pci_alloc_consistent(pdev, sizeof(struct typhoon_shared),
+ &shared_dma);
+ if(!shared) {
+ err_msg = "could not allocate DMA memory";
+ err = -ENOMEM;
+ goto error_out_remap;
+ }
+
+ dev->irq = pdev->irq;
+ tp = netdev_priv(dev);
+ tp->shared = shared;
+ tp->shared_dma = shared_dma;
+ tp->pdev = pdev;
+ tp->tx_pdev = pdev;
+ tp->ioaddr = ioaddr;
+ tp->tx_ioaddr = ioaddr;
+ tp->dev = dev;
+
+ /* Init sequence:
+ * 1) Reset the adapter to clear any bad juju
+ * 2) Reload the sleep image
+ * 3) Boot the sleep image
+ * 4) Get the hardware address.
+ * 5) Put the card to sleep.
+ */
+ if (typhoon_reset(ioaddr, WaitSleep) < 0) {
+ err_msg = "could not reset 3XP";
+ err = -EIO;
+ goto error_out_dma;
+ }
+
+ /* Now that we've reset the 3XP and are sure it's not going to
+ * write all over memory, enable bus mastering, and save our
+ * state for resuming after a suspend.
+ */
+ pci_set_master(pdev);
+ pci_save_state(pdev);
+
+ typhoon_init_interface(tp);
+ typhoon_init_rings(tp);
+
+ if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_HOST) < 0) {
+ err_msg = "cannot boot 3XP sleep image";
+ err = -EIO;
+ goto error_out_reset;
+ }
+
+ INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_MAC_ADDRESS);
+ if(typhoon_issue_command(tp, 1, &xp_cmd, 1, xp_resp) < 0) {
+ err_msg = "cannot read MAC address";
+ err = -EIO;
+ goto error_out_reset;
+ }
+
+ *(__be16 *)&dev->dev_addr[0] = htons(le16_to_cpu(xp_resp[0].parm1));
+ *(__be32 *)&dev->dev_addr[2] = htonl(le32_to_cpu(xp_resp[0].parm2));
+
+	if(!is_valid_ether_addr(dev->dev_addr)) {
+		err_msg = "Could not obtain valid ethernet address, aborting";
+		err = -EIO;
+		goto error_out_reset;
+	}
+
+ /* Read the Sleep Image version last, so the response is valid
+ * later when we print out the version reported.
+ */
+ INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_VERSIONS);
+	if(typhoon_issue_command(tp, 1, &xp_cmd, 3, xp_resp) < 0) {
+		err_msg = "Could not get Sleep Image version";
+		err = -EIO;
+		goto error_out_reset;
+	}
+
+ tp->capabilities = typhoon_card_info[card_id].capabilities;
+ tp->xcvr_select = TYPHOON_XCVR_AUTONEG;
+
+ /* Typhoon 1.0 Sleep Images return one response descriptor to the
+ * READ_VERSIONS command. Those versions are OK after waking up
+ * from sleep without needing a reset. Typhoon 1.1+ Sleep Images
+ * seem to need a little extra help to get started. Since we don't
+ * know how to nudge it along, just kick it.
+ */
+ if(xp_resp[0].numDesc != 0)
+ tp->capabilities |= TYPHOON_WAKEUP_NEEDS_RESET;
+
+ if(typhoon_sleep(tp, PCI_D3hot, 0) < 0) {
+ err_msg = "cannot put adapter to sleep";
+ err = -EIO;
+ goto error_out_reset;
+ }
+
+ /* The chip-specific entries in the device structure. */
+ dev->netdev_ops = &typhoon_netdev_ops;
+ netif_napi_add(dev, &tp->napi, typhoon_poll, 16);
+ dev->watchdog_timeo = TX_TIMEOUT;
+
+ SET_ETHTOOL_OPS(dev, &typhoon_ethtool_ops);
+
+ /* We can handle scatter gather, up to 16 entries, and
+ * we can do IP checksumming (only version 4, doh...)
+ *
+ * There's no way to turn off the RX VLAN offloading and stripping
+ * on the current 3XP firmware -- it does not respect the offload
+ * settings -- so we only allow the user to toggle the TX processing.
+ */
+ dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
+ NETIF_F_HW_VLAN_TX;
+ dev->features = dev->hw_features |
+ NETIF_F_HW_VLAN_RX | NETIF_F_RXCSUM;
+
+	err = register_netdev(dev);
+	if(err < 0) {
+ err_msg = "unable to register netdev";
+ goto error_out_reset;
+ }
+
+ pci_set_drvdata(pdev, dev);
+
+ netdev_info(dev, "%s at %s 0x%llx, %pM\n",
+ typhoon_card_info[card_id].name,
+ use_mmio ? "MMIO" : "IO",
+ (unsigned long long)pci_resource_start(pdev, use_mmio),
+ dev->dev_addr);
+
+ /* xp_resp still contains the response to the READ_VERSIONS command.
+ * For debugging, let the user know what version he has.
+ */
+ if(xp_resp[0].numDesc == 0) {
+ /* This is the Typhoon 1.0 type Sleep Image, last 16 bits
+ * of version is Month/Day of build.
+ */
+ u16 monthday = le32_to_cpu(xp_resp[0].parm2) & 0xffff;
+ netdev_info(dev, "Typhoon 1.0 Sleep Image built %02u/%02u/2000\n",
+ monthday >> 8, monthday & 0xff);
+ } else if(xp_resp[0].numDesc == 2) {
+ /* This is the Typhoon 1.1+ type Sleep Image
+ */
+ u32 sleep_ver = le32_to_cpu(xp_resp[0].parm2);
+ u8 *ver_string = (u8 *) &xp_resp[1];
+ ver_string[25] = 0;
+ netdev_info(dev, "Typhoon 1.1+ Sleep Image version %02x.%03x.%03x %s\n",
+ sleep_ver >> 24, (sleep_ver >> 12) & 0xfff,
+ sleep_ver & 0xfff, ver_string);
+ } else {
+ netdev_warn(dev, "Unknown Sleep Image version (%u:%04x)\n",
+ xp_resp[0].numDesc, le32_to_cpu(xp_resp[0].parm2));
+ }
+
+ return 0;
+
+error_out_reset:
+ typhoon_reset(ioaddr, NoWait);
+
+error_out_dma:
+ pci_free_consistent(pdev, sizeof(struct typhoon_shared),
+ shared, shared_dma);
+error_out_remap:
+ pci_iounmap(pdev, ioaddr);
+error_out_regions:
+ pci_release_regions(pdev);
+error_out_mwi:
+ pci_clear_mwi(pdev);
+error_out_disable:
+ pci_disable_device(pdev);
+error_out_dev:
+ free_netdev(dev);
+error_out:
+ pr_err("%s: %s\n", pci_name(pdev), err_msg);
+ return err;
+}
+
+static void __devexit
+typhoon_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct typhoon *tp = netdev_priv(dev);
+
+ unregister_netdev(dev);
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ typhoon_reset(tp->ioaddr, NoWait);
+ pci_iounmap(pdev, tp->ioaddr);
+ pci_free_consistent(pdev, sizeof(struct typhoon_shared),
+ tp->shared, tp->shared_dma);
+ pci_release_regions(pdev);
+ pci_clear_mwi(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ free_netdev(dev);
+}
+
+static struct pci_driver typhoon_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = typhoon_pci_tbl,
+ .probe = typhoon_init_one,
+ .remove = __devexit_p(typhoon_remove_one),
+#ifdef CONFIG_PM
+ .suspend = typhoon_suspend,
+ .resume = typhoon_resume,
+#endif
+};
+
+static int __init
+typhoon_init(void)
+{
+ return pci_register_driver(&typhoon_driver);
+}
+
+static void __exit
+typhoon_cleanup(void)
+{
+ if (typhoon_fw)
+ release_firmware(typhoon_fw);
+ pci_unregister_driver(&typhoon_driver);
+}
+
+module_init(typhoon_init);
+module_exit(typhoon_cleanup);
--- /dev/null
+/* typhoon.h: chip info for the 3Com 3CR990 family of controllers */
+/*
+ Written 2002-2003 by David Dillow <dave@thedillows.org>
+
+ This software may be used and distributed according to the terms of
+ the GNU General Public License (GPL), incorporated herein by reference.
+ Drivers based on or derived from this code fall under the GPL and must
+ retain the authorship, copyright and license notice. This file is not
+ a complete program and may only be used when the entire operating
+ system is licensed under the GPL.
+
+ This software is available on a public web site. It may enable
+ cryptographic capabilities of the 3Com hardware, and may be
+ exported from the United States under License Exception "TSU"
+ pursuant to 15 C.F.R. Section 740.13(e).
+
+ This work was funded by the National Library of Medicine under
+ the Department of Energy project number 0274DD06D1 and NLM project
+ number Y1-LM-2015-01.
+*/
+
+/* All Typhoon ring positions are specified in bytes, and point to the
+ * first "clean" entry in the ring -- ie the next entry we use for whatever
+ * purpose.
+ */
+
+/* The Typhoon basic ring
+ * ringBase: where this ring lives (our virtual address)
+ * lastWrite: the next entry we'll use
+ */
+struct basic_ring {
+ u8 *ringBase;
+ u32 lastWrite;
+};
+
+/* The Typhoon transmit ring -- same as a basic ring, plus:
+ * lastRead: where we're at in regard to cleaning up the ring
+ * writeRegister: register to use for writing (different for Hi & Lo rings)
+ */
+struct transmit_ring {
+ u8 *ringBase;
+ u32 lastWrite;
+ u32 lastRead;
+ int writeRegister;
+};
+
+/* The host<->Typhoon ring index structure
+ * This indicates the current positions in the rings
+ *
+ * All values must be in little endian format for the 3XP
+ *
+ * rxHiCleared: entry we've cleared to in the Hi receive ring
+ * rxLoCleared: entry we've cleared to in the Lo receive ring
+ * rxBuffReady: next entry we'll put a free buffer in
+ * respCleared: entry we've cleared to in the response ring
+ *
+ * txLoCleared: entry the NIC has cleared to in the Lo transmit ring
+ * txHiCleared: entry the NIC has cleared to in the Hi transmit ring
+ * rxLoReady: entry the NIC has filled to in the Lo receive ring
+ * rxBuffCleared: entry the NIC has cleared in the free buffer ring
+ * cmdCleared: entry the NIC has cleared in the command ring
+ * respReady: entry the NIC has filled to in the response ring
+ * rxHiReady: entry the NIC has filled to in the Hi receive ring
+ */
+struct typhoon_indexes {
+ /* The first four are written by the host, and read by the NIC */
+ volatile __le32 rxHiCleared;
+ volatile __le32 rxLoCleared;
+ volatile __le32 rxBuffReady;
+ volatile __le32 respCleared;
+
+ /* The remaining are written by the NIC, and read by the host */
+ volatile __le32 txLoCleared;
+ volatile __le32 txHiCleared;
+ volatile __le32 rxLoReady;
+ volatile __le32 rxBuffCleared;
+ volatile __le32 cmdCleared;
+ volatile __le32 respReady;
+ volatile __le32 rxHiReady;
+} __packed;
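+
+/* Illustrative sketch only (the loop body is paraphrased, not taken from the
+ * 3Com docs): the host can tell the NIC has posted new responses by comparing
+ * the two response indexes, e.g.
+ *
+ *	while (indexes->respReady != indexes->respCleared) {
+ *		process the resp_desc found at byte offset
+ *		le32_to_cpu(indexes->respCleared) in the response ring,
+ *		then advance respCleared past it;
+ *	}
+ *
+ * The same ready/cleared pairing applies to the other rings listed above.
+ */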
+
+/* The host<->Typhoon interface
+ * Our means of communicating where things are
+ *
+ * All values must be in little endian format for the 3XP
+ *
+ * ringIndex: 64 bit bus address of the index structure
+ * txLoAddr: 64 bit bus address of the Lo transmit ring
+ * txLoSize: size (in bytes) of the Lo transmit ring
+ * txHi*: as above for the Hi priority transmit ring
+ * rxLo*: as above for the Lo priority receive ring
+ * rxBuff*: as above for the free buffer ring
+ * cmd*: as above for the command ring
+ * resp*: as above for the response ring
+ * zeroAddr: 64 bit bus address of a zero word (for DMA)
+ * rxHi*: as above for the Hi Priority receive ring
+ *
+ * While there is room for 64 bit addresses, current versions of the 3XP
+ * only do 32 bit addresses, so the *Hi for each of the above will always
+ * be zero.
+ */
+struct typhoon_interface {
+ __le32 ringIndex;
+ __le32 ringIndexHi;
+ __le32 txLoAddr;
+ __le32 txLoAddrHi;
+ __le32 txLoSize;
+ __le32 txHiAddr;
+ __le32 txHiAddrHi;
+ __le32 txHiSize;
+ __le32 rxLoAddr;
+ __le32 rxLoAddrHi;
+ __le32 rxLoSize;
+ __le32 rxBuffAddr;
+ __le32 rxBuffAddrHi;
+ __le32 rxBuffSize;
+ __le32 cmdAddr;
+ __le32 cmdAddrHi;
+ __le32 cmdSize;
+ __le32 respAddr;
+ __le32 respAddrHi;
+ __le32 respSize;
+ __le32 zeroAddr;
+ __le32 zeroAddrHi;
+ __le32 rxHiAddr;
+ __le32 rxHiAddrHi;
+ __le32 rxHiSize;
+} __packed;
+
+/* The Typhoon transmit/fragment descriptor
+ *
+ * A packet is described by a packet descriptor, followed by option descriptors,
+ * if any, then one or more fragment descriptors.
+ *
+ * Packet descriptor:
+ * flags: Descriptor type
+ * len: zero, or length of this packet
+ * addr*: 8 bytes of opaque data to the firmware -- for skb pointer
+ * processFlags: Determine offload tasks to perform on this packet.
+ *
+ * Fragment descriptor:
+ * flags: Descriptor type
+ * len: length of this fragment
+ * addr: low bytes of DMA address for this part of the packet
+ * addrHi: hi bytes of DMA address for this part of the packet
+ * processFlags: must be zero
+ *
+ * TYPHOON_DESC_VALID is not mentioned in their docs, but their Linux
+ * driver uses it.
+ */
+struct tx_desc {
+ u8 flags;
+#define TYPHOON_TYPE_MASK 0x07
+#define TYPHOON_FRAG_DESC 0x00
+#define TYPHOON_TX_DESC 0x01
+#define TYPHOON_CMD_DESC 0x02
+#define TYPHOON_OPT_DESC 0x03
+#define TYPHOON_RX_DESC 0x04
+#define TYPHOON_RESP_DESC 0x05
+#define TYPHOON_OPT_TYPE_MASK 0xf0
+#define TYPHOON_OPT_IPSEC 0x00
+#define TYPHOON_OPT_TCP_SEG 0x10
+#define TYPHOON_CMD_RESPOND 0x40
+#define TYPHOON_RESP_ERROR 0x40
+#define TYPHOON_RX_ERROR 0x40
+#define TYPHOON_DESC_VALID 0x80
+ u8 numDesc;
+ __le16 len;
+ union {
+ struct {
+ __le32 addr;
+ __le32 addrHi;
+ } frag;
+ u64 tx_addr; /* opaque for hardware, for TX_DESC */
+ };
+ __le32 processFlags;
+#define TYPHOON_TX_PF_NO_CRC cpu_to_le32(0x00000001)
+#define TYPHOON_TX_PF_IP_CHKSUM cpu_to_le32(0x00000002)
+#define TYPHOON_TX_PF_TCP_CHKSUM cpu_to_le32(0x00000004)
+#define TYPHOON_TX_PF_TCP_SEGMENT cpu_to_le32(0x00000008)
+#define TYPHOON_TX_PF_INSERT_VLAN cpu_to_le32(0x00000010)
+#define TYPHOON_TX_PF_IPSEC cpu_to_le32(0x00000020)
+#define TYPHOON_TX_PF_VLAN_PRIORITY cpu_to_le32(0x00000040)
+#define TYPHOON_TX_PF_UDP_CHKSUM cpu_to_le32(0x00000080)
+#define TYPHOON_TX_PF_PAD_FRAME cpu_to_le32(0x00000100)
+#define TYPHOON_TX_PF_RESERVED cpu_to_le32(0x00000e00)
+#define TYPHOON_TX_PF_VLAN_MASK cpu_to_le32(0x0ffff000)
+#define TYPHOON_TX_PF_INTERNAL cpu_to_le32(0xf0000000)
+#define TYPHOON_TX_PF_VLAN_TAG_SHIFT 12
+} __packed;
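+
+/* Minimal sketch, for illustration only: filling a fragment descriptor for a
+ * single-fragment packet. txd and skb_dma are assumptions -- a pointer to the
+ * next free tx_desc in the ring and the bus address from the streaming DMA
+ * mapping, respectively:
+ *
+ *	txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
+ *	txd->len = cpu_to_le16(skb->len);
+ *	txd->frag.addr = cpu_to_le32(skb_dma);
+ *	txd->frag.addrHi = 0;
+ *	txd->processFlags = 0;
+ */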
+
+/* The TCP Segmentation offload option descriptor
+ *
+ * flags: descriptor type
+ * numDesc: must be 1
+ * mss_flags: bits 0-11 (little endian) are MSS, 12 is first TSO descriptor
+ * 13 is last TSO descriptor, set both if only one TSO
+ * respAddrLo: low bytes of address of the bytesTx field of this descriptor
+ * bytesTx: total number of bytes in this TSO request
+ * status: 0 on completion
+ */
+struct tcpopt_desc {
+ u8 flags;
+ u8 numDesc;
+ __le16 mss_flags;
+#define TYPHOON_TSO_FIRST cpu_to_le16(0x1000)
+#define TYPHOON_TSO_LAST cpu_to_le16(0x2000)
+ __le32 respAddrLo;
+ __le32 bytesTx;
+ __le32 status;
+} __packed;
+
+/* The IPSEC Offload descriptor
+ *
+ * flags: descriptor type
+ * numDesc: must be 1
+ * ipsecFlags: bit 0: 0 -- generate IV, 1 -- use supplied IV
+ * sa1, sa2: Security Association IDs for this packet
+ * reserved: set to 0
+ */
+struct ipsec_desc {
+ u8 flags;
+ u8 numDesc;
+ __le16 ipsecFlags;
+#define TYPHOON_IPSEC_GEN_IV cpu_to_le16(0x0000)
+#define TYPHOON_IPSEC_USE_IV cpu_to_le16(0x0001)
+ __le32 sa1;
+ __le32 sa2;
+ __le32 reserved;
+} __packed;
+
+/* The Typhoon receive descriptor (Updated by NIC)
+ *
+ * flags: Descriptor type, error indication
+ * numDesc: Always zero
+ * frameLen: the size of the packet received
+ * addr: low 32 bits of the virtual addr passed in for this buffer
+ * addrHi: high 32 bits of the virtual addr passed in for this buffer
+ * rxStatus: Error if set in flags, otherwise result of offload processing
+ * filterResults: results of filtering on packet, not used
+ * ipsecResults: Results of IPSEC processing
+ * vlanTag: the 802.1q TCI from the packet
+ */
+struct rx_desc {
+ u8 flags;
+ u8 numDesc;
+ __le16 frameLen;
+ u32 addr; /* opaque, comes from virtAddr */
+ u32 addrHi; /* opaque, comes from virtAddrHi */
+ __le32 rxStatus;
+#define TYPHOON_RX_ERR_INTERNAL cpu_to_le32(0x00000000)
+#define TYPHOON_RX_ERR_FIFO_UNDERRUN cpu_to_le32(0x00000001)
+#define TYPHOON_RX_ERR_BAD_SSD cpu_to_le32(0x00000002)
+#define TYPHOON_RX_ERR_RUNT cpu_to_le32(0x00000003)
+#define TYPHOON_RX_ERR_CRC cpu_to_le32(0x00000004)
+#define TYPHOON_RX_ERR_OVERSIZE cpu_to_le32(0x00000005)
+#define TYPHOON_RX_ERR_ALIGN cpu_to_le32(0x00000006)
+#define TYPHOON_RX_ERR_DRIBBLE cpu_to_le32(0x00000007)
+#define TYPHOON_RX_PROTO_MASK cpu_to_le32(0x00000003)
+#define TYPHOON_RX_PROTO_UNKNOWN cpu_to_le32(0x00000000)
+#define TYPHOON_RX_PROTO_IP cpu_to_le32(0x00000001)
+#define TYPHOON_RX_PROTO_IPX cpu_to_le32(0x00000002)
+#define TYPHOON_RX_VLAN cpu_to_le32(0x00000004)
+#define TYPHOON_RX_IP_FRAG cpu_to_le32(0x00000008)
+#define TYPHOON_RX_IPSEC cpu_to_le32(0x00000010)
+#define TYPHOON_RX_IP_CHK_FAIL cpu_to_le32(0x00000020)
+#define TYPHOON_RX_TCP_CHK_FAIL cpu_to_le32(0x00000040)
+#define TYPHOON_RX_UDP_CHK_FAIL cpu_to_le32(0x00000080)
+#define TYPHOON_RX_IP_CHK_GOOD cpu_to_le32(0x00000100)
+#define TYPHOON_RX_TCP_CHK_GOOD cpu_to_le32(0x00000200)
+#define TYPHOON_RX_UDP_CHK_GOOD cpu_to_le32(0x00000400)
+ __le16 filterResults;
+#define TYPHOON_RX_FILTER_MASK cpu_to_le16(0x7fff)
+#define TYPHOON_RX_FILTERED cpu_to_le16(0x8000)
+ __le16 ipsecResults;
+#define TYPHOON_RX_OUTER_AH_GOOD cpu_to_le16(0x0001)
+#define TYPHOON_RX_OUTER_ESP_GOOD cpu_to_le16(0x0002)
+#define TYPHOON_RX_INNER_AH_GOOD cpu_to_le16(0x0004)
+#define TYPHOON_RX_INNER_ESP_GOOD cpu_to_le16(0x0008)
+#define TYPHOON_RX_OUTER_AH_FAIL cpu_to_le16(0x0010)
+#define TYPHOON_RX_OUTER_ESP_FAIL cpu_to_le16(0x0020)
+#define TYPHOON_RX_INNER_AH_FAIL cpu_to_le16(0x0040)
+#define TYPHOON_RX_INNER_ESP_FAIL cpu_to_le16(0x0080)
+#define TYPHOON_RX_UNKNOWN_SA cpu_to_le16(0x0100)
+#define TYPHOON_RX_ESP_FORMAT_ERR cpu_to_le16(0x0200)
+ __be32 vlanTag;
+} __packed;
+
+/* The Typhoon free buffer descriptor, used to give a buffer to the NIC
+ *
+ * physAddr: low 32 bits of the bus address of the buffer
+ * physAddrHi: high 32 bits of the bus address of the buffer, always zero
+ * virtAddr: low 32 bits of the skb address
+ * virtAddrHi: high 32 bits of the skb address, always zero
+ *
+ * the virt* address is basically two 32 bit cookies, just passed back
+ * from the NIC
+ */
+struct rx_free {
+ __le32 physAddr;
+ __le32 physAddrHi;
+ u32 virtAddr;
+ u32 virtAddrHi;
+} __packed;
+
+/* The Typhoon command descriptor, used for commands and responses
+ *
+ * flags: descriptor type
+ * numDesc: number of descriptors following in this command/response,
+ * ie, zero for a one descriptor command
+ * cmd: the command
+ * seqNo: sequence number (unused)
+ * parm1: use varies by command
+ * parm2: use varies by command
+ * parm3: use varies by command
+ */
+struct cmd_desc {
+ u8 flags;
+ u8 numDesc;
+ __le16 cmd;
+#define TYPHOON_CMD_TX_ENABLE cpu_to_le16(0x0001)
+#define TYPHOON_CMD_TX_DISABLE cpu_to_le16(0x0002)
+#define TYPHOON_CMD_RX_ENABLE cpu_to_le16(0x0003)
+#define TYPHOON_CMD_RX_DISABLE cpu_to_le16(0x0004)
+#define TYPHOON_CMD_SET_RX_FILTER cpu_to_le16(0x0005)
+#define TYPHOON_CMD_READ_STATS cpu_to_le16(0x0007)
+#define TYPHOON_CMD_XCVR_SELECT cpu_to_le16(0x0013)
+#define TYPHOON_CMD_SET_MAX_PKT_SIZE cpu_to_le16(0x001a)
+#define TYPHOON_CMD_READ_MEDIA_STATUS cpu_to_le16(0x001b)
+#define TYPHOON_CMD_GOTO_SLEEP cpu_to_le16(0x0023)
+#define TYPHOON_CMD_SET_MULTICAST_HASH cpu_to_le16(0x0025)
+#define TYPHOON_CMD_SET_MAC_ADDRESS cpu_to_le16(0x0026)
+#define TYPHOON_CMD_READ_MAC_ADDRESS cpu_to_le16(0x0027)
+#define TYPHOON_CMD_VLAN_TYPE_WRITE cpu_to_le16(0x002b)
+#define TYPHOON_CMD_CREATE_SA cpu_to_le16(0x0034)
+#define TYPHOON_CMD_DELETE_SA cpu_to_le16(0x0035)
+#define TYPHOON_CMD_READ_VERSIONS cpu_to_le16(0x0043)
+#define TYPHOON_CMD_IRQ_COALESCE_CTRL cpu_to_le16(0x0045)
+#define TYPHOON_CMD_ENABLE_WAKE_EVENTS cpu_to_le16(0x0049)
+#define TYPHOON_CMD_SET_OFFLOAD_TASKS cpu_to_le16(0x004f)
+#define TYPHOON_CMD_HELLO_RESP cpu_to_le16(0x0057)
+#define TYPHOON_CMD_HALT cpu_to_le16(0x005d)
+#define TYPHOON_CMD_READ_IPSEC_INFO cpu_to_le16(0x005e)
+#define TYPHOON_CMD_GET_IPSEC_ENABLE cpu_to_le16(0x0067)
+#define TYPHOON_CMD_GET_CMD_LVL cpu_to_le16(0x0069)
+ u16 seqNo;
+ __le16 parm1;
+ __le32 parm2;
+ __le32 parm3;
+} __packed;
+
+/* The Typhoon response descriptor, see command descriptor for details
+ */
+struct resp_desc {
+ u8 flags;
+ u8 numDesc;
+ __le16 cmd;
+ __le16 seqNo;
+ __le16 parm1;
+ __le32 parm2;
+ __le32 parm3;
+} __packed;
+
+#define INIT_COMMAND_NO_RESPONSE(x, command) \
+ do { struct cmd_desc *_ptr = (x); \
+ memset(_ptr, 0, sizeof(struct cmd_desc)); \
+ _ptr->flags = TYPHOON_CMD_DESC | TYPHOON_DESC_VALID; \
+ _ptr->cmd = command; \
+ } while(0)
+
+/* We set seqNo to 1 if we're expecting a response from this command */
+#define INIT_COMMAND_WITH_RESPONSE(x, command) \
+ do { struct cmd_desc *_ptr = (x); \
+ memset(_ptr, 0, sizeof(struct cmd_desc)); \
+ _ptr->flags = TYPHOON_CMD_RESPOND | TYPHOON_CMD_DESC; \
+ _ptr->flags |= TYPHOON_DESC_VALID; \
+ _ptr->cmd = command; \
+ _ptr->seqNo = 1; \
+ } while(0)
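+
+/* Usage mirrors the probe code in typhoon.c, e.g. reading the MAC address
+ * (tp is the driver private struct):
+ *
+ *	struct cmd_desc xp_cmd;
+ *	struct resp_desc xp_resp[1];
+ *
+ *	INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_MAC_ADDRESS);
+ *	if (typhoon_issue_command(tp, 1, &xp_cmd, 1, xp_resp) < 0)
+ *		handle the error;
+ */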
+
+/* TYPHOON_CMD_SET_RX_FILTER filter bits (cmd.parm1)
+ */
+#define TYPHOON_RX_FILTER_DIRECTED cpu_to_le16(0x0001)
+#define TYPHOON_RX_FILTER_ALL_MCAST cpu_to_le16(0x0002)
+#define TYPHOON_RX_FILTER_BROADCAST cpu_to_le16(0x0004)
+#define TYPHOON_RX_FILTER_PROMISCOUS cpu_to_le16(0x0008)
+#define TYPHOON_RX_FILTER_MCAST_HASH cpu_to_le16(0x0010)
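+
+/* Simplified sketch of typhoon_set_rx_mode() in typhoon.c: the filter bits
+ * are ORed together and handed to the NIC in cmd.parm1 (xp_cmd, tp and dev
+ * come from the surrounding driver context):
+ *
+ *	__le16 filter = TYPHOON_RX_FILTER_DIRECTED | TYPHOON_RX_FILTER_BROADCAST;
+ *	if (dev->flags & IFF_PROMISC)
+ *		filter |= TYPHOON_RX_FILTER_PROMISCOUS;
+ *
+ *	INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_RX_FILTER);
+ *	xp_cmd.parm1 = filter;
+ *	typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
+ */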
+
+/* TYPHOON_CMD_READ_STATS response format
+ */
+struct stats_resp {
+ u8 flags;
+ u8 numDesc;
+ __le16 cmd;
+ __le16 seqNo;
+ __le16 unused;
+ __le32 txPackets;
+ __le64 txBytes;
+ __le32 txDeferred;
+ __le32 txLateCollisions;
+ __le32 txCollisions;
+ __le32 txCarrierLost;
+ __le32 txMultipleCollisions;
+ __le32 txExcessiveCollisions;
+ __le32 txFifoUnderruns;
+ __le32 txMulticastTxOverflows;
+ __le32 txFiltered;
+ __le32 rxPacketsGood;
+ __le64 rxBytesGood;
+ __le32 rxFifoOverruns;
+ __le32 BadSSD;
+ __le32 rxCrcErrors;
+ __le32 rxOversized;
+ __le32 rxBroadcast;
+ __le32 rxMulticast;
+ __le32 rxOverflow;
+ __le32 rxFiltered;
+ __le32 linkStatus;
+#define TYPHOON_LINK_STAT_MASK cpu_to_le32(0x00000001)
+#define TYPHOON_LINK_GOOD cpu_to_le32(0x00000001)
+#define TYPHOON_LINK_BAD cpu_to_le32(0x00000000)
+#define TYPHOON_LINK_SPEED_MASK cpu_to_le32(0x00000002)
+#define TYPHOON_LINK_100MBPS cpu_to_le32(0x00000002)
+#define TYPHOON_LINK_10MBPS cpu_to_le32(0x00000000)
+#define TYPHOON_LINK_DUPLEX_MASK cpu_to_le32(0x00000004)
+#define TYPHOON_LINK_FULL_DUPLEX cpu_to_le32(0x00000004)
+#define TYPHOON_LINK_HALF_DUPLEX cpu_to_le32(0x00000000)
+ __le32 unused2;
+ __le32 unused3;
+} __packed;
+
+/* TYPHOON_CMD_XCVR_SELECT xcvr values (resp.parm1)
+ */
+#define TYPHOON_XCVR_10HALF cpu_to_le16(0x0000)
+#define TYPHOON_XCVR_10FULL cpu_to_le16(0x0001)
+#define TYPHOON_XCVR_100HALF cpu_to_le16(0x0002)
+#define TYPHOON_XCVR_100FULL cpu_to_le16(0x0003)
+#define TYPHOON_XCVR_AUTONEG cpu_to_le16(0x0004)
+
+/* TYPHOON_CMD_READ_MEDIA_STATUS (resp.parm1)
+ */
+#define TYPHOON_MEDIA_STAT_CRC_STRIP_DISABLE cpu_to_le16(0x0004)
+#define TYPHOON_MEDIA_STAT_COLLISION_DETECT cpu_to_le16(0x0010)
+#define TYPHOON_MEDIA_STAT_CARRIER_SENSE cpu_to_le16(0x0020)
+#define TYPHOON_MEDIA_STAT_POLARITY_REV cpu_to_le16(0x0400)
+#define TYPHOON_MEDIA_STAT_NO_LINK cpu_to_le16(0x0800)
+
+/* TYPHOON_CMD_SET_MULTICAST_HASH enable values (cmd.parm1)
+ */
+#define TYPHOON_MCAST_HASH_DISABLE cpu_to_le16(0x0000)
+#define TYPHOON_MCAST_HASH_ENABLE cpu_to_le16(0x0001)
+#define TYPHOON_MCAST_HASH_SET cpu_to_le16(0x0002)
+
+/* TYPHOON_CMD_CREATE_SA descriptor and settings
+ */
+struct sa_descriptor {
+ u8 flags;
+ u8 numDesc;
+ u16 cmd;
+ u16 seqNo;
+ u16 mode;
+#define TYPHOON_SA_MODE_NULL cpu_to_le16(0x0000)
+#define TYPHOON_SA_MODE_AH cpu_to_le16(0x0001)
+#define TYPHOON_SA_MODE_ESP cpu_to_le16(0x0002)
+ u8 hashFlags;
+#define TYPHOON_SA_HASH_ENABLE 0x01
+#define TYPHOON_SA_HASH_SHA1 0x02
+#define TYPHOON_SA_HASH_MD5 0x04
+ u8 direction;
+#define TYPHOON_SA_DIR_RX 0x00
+#define TYPHOON_SA_DIR_TX 0x01
+ u8 encryptionFlags;
+#define TYPHOON_SA_ENCRYPT_ENABLE 0x01
+#define TYPHOON_SA_ENCRYPT_DES 0x02
+#define TYPHOON_SA_ENCRYPT_3DES 0x00
+#define TYPHOON_SA_ENCRYPT_3DES_2KEY 0x00
+#define TYPHOON_SA_ENCRYPT_3DES_3KEY 0x04
+#define TYPHOON_SA_ENCRYPT_CBC 0x08
+#define TYPHOON_SA_ENCRYPT_ECB 0x00
+ u8 specifyIndex;
+#define TYPHOON_SA_SPECIFY_INDEX 0x01
+#define TYPHOON_SA_GENERATE_INDEX 0x00
+ u32 SPI;
+ u32 destAddr;
+ u32 destMask;
+ u8 integKey[20];
+ u8 confKey[24];
+ u32 index;
+ u32 unused;
+ u32 unused2;
+} __packed;
+
+/* TYPHOON_CMD_SET_OFFLOAD_TASKS bits (cmd.parm2 (Tx) & cmd.parm3 (Rx))
+ * This is all for IPv4.
+ */
+#define TYPHOON_OFFLOAD_TCP_CHKSUM cpu_to_le32(0x00000002)
+#define TYPHOON_OFFLOAD_UDP_CHKSUM cpu_to_le32(0x00000004)
+#define TYPHOON_OFFLOAD_IP_CHKSUM cpu_to_le32(0x00000008)
+#define TYPHOON_OFFLOAD_IPSEC cpu_to_le32(0x00000010)
+#define TYPHOON_OFFLOAD_BCAST_THROTTLE cpu_to_le32(0x00000020)
+#define TYPHOON_OFFLOAD_DHCP_PREVENT cpu_to_le32(0x00000040)
+#define TYPHOON_OFFLOAD_VLAN cpu_to_le32(0x00000080)
+#define TYPHOON_OFFLOAD_FILTERING cpu_to_le32(0x00000100)
+#define TYPHOON_OFFLOAD_TCP_SEGMENT cpu_to_le32(0x00000200)
+
+/* TYPHOON_CMD_ENABLE_WAKE_EVENTS bits (cmd.parm1)
+ */
+#define TYPHOON_WAKE_MAGIC_PKT cpu_to_le16(0x01)
+#define TYPHOON_WAKE_LINK_EVENT cpu_to_le16(0x02)
+#define TYPHOON_WAKE_ICMP_ECHO cpu_to_le16(0x04)
+#define TYPHOON_WAKE_ARP cpu_to_le16(0x08)
+
+/* These are used to load the firmware image on the NIC
+ */
+struct typhoon_file_header {
+ u8 tag[8];
+ __le32 version;
+ __le32 numSections;
+ __le32 startAddr;
+ __le32 hmacDigest[5];
+} __packed;
+
+struct typhoon_section_header {
+ __le32 len;
+ u16 checksum;
+ u16 reserved;
+ __le32 startAddr;
+} __packed;
+
+/* The Typhoon Register offsets
+ */
+#define TYPHOON_REG_SOFT_RESET 0x00
+#define TYPHOON_REG_INTR_STATUS 0x04
+#define TYPHOON_REG_INTR_ENABLE 0x08
+#define TYPHOON_REG_INTR_MASK 0x0c
+#define TYPHOON_REG_SELF_INTERRUPT 0x10
+#define TYPHOON_REG_HOST2ARM7 0x14
+#define TYPHOON_REG_HOST2ARM6 0x18
+#define TYPHOON_REG_HOST2ARM5 0x1c
+#define TYPHOON_REG_HOST2ARM4 0x20
+#define TYPHOON_REG_HOST2ARM3 0x24
+#define TYPHOON_REG_HOST2ARM2 0x28
+#define TYPHOON_REG_HOST2ARM1 0x2c
+#define TYPHOON_REG_HOST2ARM0 0x30
+#define TYPHOON_REG_ARM2HOST3 0x34
+#define TYPHOON_REG_ARM2HOST2 0x38
+#define TYPHOON_REG_ARM2HOST1 0x3c
+#define TYPHOON_REG_ARM2HOST0 0x40
+
+#define TYPHOON_REG_BOOT_DATA_LO TYPHOON_REG_HOST2ARM5
+#define TYPHOON_REG_BOOT_DATA_HI TYPHOON_REG_HOST2ARM4
+#define TYPHOON_REG_BOOT_DEST_ADDR TYPHOON_REG_HOST2ARM3
+#define TYPHOON_REG_BOOT_CHECKSUM TYPHOON_REG_HOST2ARM2
+#define TYPHOON_REG_BOOT_LENGTH TYPHOON_REG_HOST2ARM1
+
+#define TYPHOON_REG_DOWNLOAD_BOOT_ADDR TYPHOON_REG_HOST2ARM1
+#define TYPHOON_REG_DOWNLOAD_HMAC_0 TYPHOON_REG_HOST2ARM2
+#define TYPHOON_REG_DOWNLOAD_HMAC_1 TYPHOON_REG_HOST2ARM3
+#define TYPHOON_REG_DOWNLOAD_HMAC_2 TYPHOON_REG_HOST2ARM4
+#define TYPHOON_REG_DOWNLOAD_HMAC_3 TYPHOON_REG_HOST2ARM5
+#define TYPHOON_REG_DOWNLOAD_HMAC_4 TYPHOON_REG_HOST2ARM6
+
+#define TYPHOON_REG_BOOT_RECORD_ADDR_HI TYPHOON_REG_HOST2ARM2
+#define TYPHOON_REG_BOOT_RECORD_ADDR_LO TYPHOON_REG_HOST2ARM1
+
+#define TYPHOON_REG_TX_LO_READY TYPHOON_REG_HOST2ARM3
+#define TYPHOON_REG_CMD_READY TYPHOON_REG_HOST2ARM2
+#define TYPHOON_REG_TX_HI_READY TYPHOON_REG_HOST2ARM1
+
+#define TYPHOON_REG_COMMAND TYPHOON_REG_HOST2ARM0
+#define TYPHOON_REG_HEARTBEAT TYPHOON_REG_ARM2HOST3
+#define TYPHOON_REG_STATUS TYPHOON_REG_ARM2HOST0
+
+/* 3XP Reset values (TYPHOON_REG_SOFT_RESET)
+ */
+#define TYPHOON_RESET_ALL 0x7f
+#define TYPHOON_RESET_NONE 0x00
+
+/* 3XP irq bits (TYPHOON_REG_INTR{STATUS,ENABLE,MASK})
+ *
+ * Some of these came from OpenBSD, as the 3Com docs have it wrong
+ * (INTR_SELF) or don't list it at all (INTR_*_ABORT)
+ *
+ * Enabling irqs on the Heartbeat reg (ArmToHost3) gets you an irq
+ * about every 8ms, so don't do it.
+ */
+#define TYPHOON_INTR_HOST_INT 0x00000001
+#define TYPHOON_INTR_ARM2HOST0 0x00000002
+#define TYPHOON_INTR_ARM2HOST1 0x00000004
+#define TYPHOON_INTR_ARM2HOST2 0x00000008
+#define TYPHOON_INTR_ARM2HOST3 0x00000010
+#define TYPHOON_INTR_DMA0 0x00000020
+#define TYPHOON_INTR_DMA1 0x00000040
+#define TYPHOON_INTR_DMA2 0x00000080
+#define TYPHOON_INTR_DMA3 0x00000100
+#define TYPHOON_INTR_MASTER_ABORT 0x00000200
+#define TYPHOON_INTR_TARGET_ABORT 0x00000400
+#define TYPHOON_INTR_SELF 0x00000800
+#define TYPHOON_INTR_RESERVED 0xfffff000
+
+#define TYPHOON_INTR_BOOTCMD TYPHOON_INTR_ARM2HOST0
+
+#define TYPHOON_INTR_ENABLE_ALL 0xffffffef
+#define TYPHOON_INTR_ALL 0xffffffff
+#define TYPHOON_INTR_NONE 0x00000000
+
+/* The commands for the 3XP chip (TYPHOON_REG_COMMAND)
+ */
+#define TYPHOON_BOOTCMD_BOOT 0x00
+#define TYPHOON_BOOTCMD_WAKEUP 0xfa
+#define TYPHOON_BOOTCMD_DNLD_COMPLETE 0xfb
+#define TYPHOON_BOOTCMD_SEG_AVAILABLE 0xfc
+#define TYPHOON_BOOTCMD_RUNTIME_IMAGE 0xfd
+#define TYPHOON_BOOTCMD_REG_BOOT_RECORD 0xff
+
+/* 3XP Status values (TYPHOON_REG_STATUS)
+ */
+#define TYPHOON_STATUS_WAITING_FOR_BOOT 0x07
+#define TYPHOON_STATUS_SECOND_INIT 0x08
+#define TYPHOON_STATUS_RUNNING 0x09
+#define TYPHOON_STATUS_WAITING_FOR_HOST 0x0d
+#define TYPHOON_STATUS_WAITING_FOR_SEGMENT 0x10
+#define TYPHOON_STATUS_SLEEPING 0x11
+#define TYPHOON_STATUS_HALTED 0x14
if ETHERNET
+source "drivers/net/ethernet/3com/Kconfig"
+
endif # ETHERNET
#
# Makefile for the Linux network Ethernet device drivers.
#
+
+obj-$(CONFIG_NET_VENDOR_3COM) += 3com/
+++ /dev/null
-/* 3c574.c: A PCMCIA ethernet driver for the 3com 3c574 "RoadRunner".
-
- Written 1993-1998 by
- Donald Becker, becker@scyld.com, (driver core) and
- David Hinds, dahinds@users.sourceforge.net (from his PC card code).
- Locking fixes (C) Copyright 2003 Red Hat Inc
-
- This software may be used and distributed according to the terms of
- the GNU General Public License, incorporated herein by reference.
-
- This driver derives from Donald Becker's 3c509 core, which has the
- following copyright:
- Copyright 1993 United States Government as represented by the
- Director, National Security Agency.
-
-
-*/
-
-/*
- Theory of Operation
-
-I. Board Compatibility
-
-This device driver is designed for the 3Com 3c574 PC card Fast Ethernet
-Adapter.
-
-II. Board-specific settings
-
-None -- PC cards are autoconfigured.
-
-III. Driver operation
-
-The 3c574 uses a Boomerang-style interface, without the bus-master capability.
-See the Boomerang driver and documentation for most details.
-
-IV. Notes and chip documentation.
-
-Two added registers are used to enhance PIO performance, RunnerRdCtrl and
-RunnerWrCtrl. These are 11 bit down-counters that are preloaded with the
-count of word (16 bits) reads or writes the driver is about to do to the Rx
-or Tx FIFO. The chip is then able to hide the internal-PCI-bus to PC-card
-translation latency by buffering the I/O operations with an 8 word FIFO.
-Note: No other chip accesses are permitted when this buffer is used.
-
-A second enhancement is that both attribute and common memory space
-0x0800-0x0fff can translated to the PIO FIFO. Thus memory operations (faster
-with *some* PCcard bridges) may be used instead of I/O operations.
-This is enabled by setting the 0x10 bit in the PCMCIA LAN COR.
-
-Some slow PC card bridges work better if they never see a WAIT signal.
-This is configured by setting the 0x20 bit in the PCMCIA LAN COR.
-Only do this after testing that it is reliable and improves performance.
-
-The upper five bits of RunnerRdCtrl are used to window into PCcard
-configuration space registers. Window 0 is the regular Boomerang/Odie
-register set, 1-5 are various PC card control registers, and 16-31 are
-the (reversed!) CIS table.
-
-A final note: writing the InternalConfig register in window 3 with an
-invalid ramWidth is Very Bad.
-
-V. References
-
-http://www.scyld.com/expert/NWay.html
-http://www.national.com/opf/DP/DP83840A.html
-
-Thanks to Terry Murphy of 3Com for providing development information for
-earlier 3Com products.
-
-*/
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <linux/string.h>
-#include <linux/timer.h>
-#include <linux/interrupt.h>
-#include <linux/in.h>
-#include <linux/delay.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/if_arp.h>
-#include <linux/ioport.h>
-#include <linux/bitops.h>
-#include <linux/mii.h>
-
-#include <pcmcia/cistpl.h>
-#include <pcmcia/cisreg.h>
-#include <pcmcia/ciscode.h>
-#include <pcmcia/ds.h>
-
-#include <asm/uaccess.h>
-#include <asm/io.h>
-#include <asm/system.h>
-
-/*====================================================================*/
-
-/* Module parameters */
-
-MODULE_AUTHOR("David Hinds <dahinds@users.sourceforge.net>");
-MODULE_DESCRIPTION("3Com 3c574 series PCMCIA ethernet driver");
-MODULE_LICENSE("GPL");
-
-#define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0)
-
-/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
-INT_MODULE_PARM(max_interrupt_work, 32);
-
-/* Force full duplex modes? */
-INT_MODULE_PARM(full_duplex, 0);
-
-/* Autodetect link polarity reversal? */
-INT_MODULE_PARM(auto_polarity, 1);
-
-
-/*====================================================================*/
-
-/* Time in jiffies before concluding the transmitter is hung. */
-#define TX_TIMEOUT ((800*HZ)/1000)
-
-/* To minimize the size of the driver source and make the driver more
- readable not all constants are symbolically defined.
- You'll need the manual if you want to understand driver details anyway. */
-/* Offsets from base I/O address. */
-#define EL3_DATA 0x00
-#define EL3_CMD 0x0e
-#define EL3_STATUS 0x0e
-
-#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
-
-/* The top five bits written to EL3_CMD are a command, the lower
- 11 bits are the parameter, if applicable. */
-enum el3_cmds {
- TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
- RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11, RxDiscard = 8<<11,
- TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
- FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
- SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
- SetTxThreshold = 18<<11, SetTxStart = 19<<11, StatsEnable = 21<<11,
- StatsDisable = 22<<11, StopCoax = 23<<11,
-};
-
-enum elxl_status {
- IntLatch = 0x0001, AdapterFailure = 0x0002, TxComplete = 0x0004,
- TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
- IntReq = 0x0040, StatsFull = 0x0080, CmdBusy = 0x1000 };
-
-/* The SetRxFilter command accepts the following classes: */
-enum RxFilter {
- RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8
-};
-
-enum Window0 {
- Wn0EepromCmd = 10, Wn0EepromData = 12, /* EEPROM command/address, data. */
- IntrStatus=0x0E, /* Valid in all windows. */
-};
-/* These assumes the larger EEPROM. */
-enum Win0_EEPROM_cmds {
- EEPROM_Read = 0x200, EEPROM_WRITE = 0x100, EEPROM_ERASE = 0x300,
- EEPROM_EWENB = 0x30, /* Enable erasing/writing for 10 msec. */
- EEPROM_EWDIS = 0x00, /* Disable EWENB before 10 msec timeout. */
-};
-
-/* Register window 1 offsets, the window used in normal operation.
- On the "Odie" this window is always mapped at offsets 0x10-0x1f.
- Except for TxFree, which is overlapped by RunnerWrCtrl. */
-enum Window1 {
- TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
- RxStatus = 0x18, Timer=0x1A, TxStatus = 0x1B,
- TxFree = 0x0C, /* Remaining free bytes in Tx buffer. */
- RunnerRdCtrl = 0x16, RunnerWrCtrl = 0x1c,
-};
-
-enum Window3 { /* Window 3: MAC/config bits. */
- Wn3_Config=0, Wn3_MAC_Ctrl=6, Wn3_Options=8,
-};
-enum wn3_config {
- Ram_size = 7,
- Ram_width = 8,
- Ram_speed = 0x30,
- Rom_size = 0xc0,
- Ram_split_shift = 16,
- Ram_split = 3 << Ram_split_shift,
- Xcvr_shift = 20,
- Xcvr = 7 << Xcvr_shift,
- Autoselect = 0x1000000,
-};
-
-enum Window4 { /* Window 4: Xcvr/media bits. */
- Wn4_FIFODiag = 4, Wn4_NetDiag = 6, Wn4_PhysicalMgmt=8, Wn4_Media = 10,
-};
-
-#define MEDIA_TP 0x00C0 /* Enable link beat and jabber for 10baseT. */
-
-struct el3_private {
- struct pcmcia_device *p_dev;
- u16 advertising, partner; /* NWay media advertisement */
- unsigned char phys; /* MII device address */
- unsigned int autoselect:1, default_media:3; /* Read from the EEPROM/Wn3_Config. */
- /* for transceiver monitoring */
- struct timer_list media;
- unsigned short media_status;
- unsigned short fast_poll;
- unsigned long last_irq;
- spinlock_t window_lock; /* Guards the Window selection */
-};
-
-/* Set iff a MII transceiver on any interface requires mdio preamble.
- This only set with the original DP83840 on older 3c905 boards, so the extra
- code size of a per-interface flag is not worthwhile. */
-static char mii_preamble_required = 0;
-
-/* Index of functions. */
-
-static int tc574_config(struct pcmcia_device *link);
-static void tc574_release(struct pcmcia_device *link);
-
-static void mdio_sync(unsigned int ioaddr, int bits);
-static int mdio_read(unsigned int ioaddr, int phy_id, int location);
-static void mdio_write(unsigned int ioaddr, int phy_id, int location,
- int value);
-static unsigned short read_eeprom(unsigned int ioaddr, int index);
-static void tc574_wait_for_completion(struct net_device *dev, int cmd);
-
-static void tc574_reset(struct net_device *dev);
-static void media_check(unsigned long arg);
-static int el3_open(struct net_device *dev);
-static netdev_tx_t el3_start_xmit(struct sk_buff *skb,
- struct net_device *dev);
-static irqreturn_t el3_interrupt(int irq, void *dev_id);
-static void update_stats(struct net_device *dev);
-static struct net_device_stats *el3_get_stats(struct net_device *dev);
-static int el3_rx(struct net_device *dev, int worklimit);
-static int el3_close(struct net_device *dev);
-static void el3_tx_timeout(struct net_device *dev);
-static int el3_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
-static void set_rx_mode(struct net_device *dev);
-static void set_multicast_list(struct net_device *dev);
-
-static void tc574_detach(struct pcmcia_device *p_dev);
-
-/*
- tc574_attach() creates an "instance" of the driver, allocating
- local data structures for one device. The device is registered
- with Card Services.
-*/
-static const struct net_device_ops el3_netdev_ops = {
- .ndo_open = el3_open,
- .ndo_stop = el3_close,
- .ndo_start_xmit = el3_start_xmit,
- .ndo_tx_timeout = el3_tx_timeout,
- .ndo_get_stats = el3_get_stats,
- .ndo_do_ioctl = el3_ioctl,
- .ndo_set_multicast_list = set_multicast_list,
- .ndo_change_mtu = eth_change_mtu,
- .ndo_set_mac_address = eth_mac_addr,
- .ndo_validate_addr = eth_validate_addr,
-};
-
-static int tc574_probe(struct pcmcia_device *link)
-{
- struct el3_private *lp;
- struct net_device *dev;
-
- dev_dbg(&link->dev, "3c574_attach()\n");
-
- /* Create the PC card device object. */
- dev = alloc_etherdev(sizeof(struct el3_private));
- if (!dev)
- return -ENOMEM;
- lp = netdev_priv(dev);
- link->priv = dev;
- lp->p_dev = link;
-
- spin_lock_init(&lp->window_lock);
- link->resource[0]->end = 32;
- link->resource[0]->flags |= IO_DATA_PATH_WIDTH_16;
- link->config_flags |= CONF_ENABLE_IRQ;
- link->config_index = 1;
-
- dev->netdev_ops = &el3_netdev_ops;
- dev->watchdog_timeo = TX_TIMEOUT;
-
- return tc574_config(link);
-}
-
-static void tc574_detach(struct pcmcia_device *link)
-{
- struct net_device *dev = link->priv;
-
- dev_dbg(&link->dev, "3c574_detach()\n");
-
- unregister_netdev(dev);
-
- tc574_release(link);
-
- free_netdev(dev);
-} /* tc574_detach */
-
-static const char *ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
-
-static int tc574_config(struct pcmcia_device *link)
-{
- struct net_device *dev = link->priv;
- struct el3_private *lp = netdev_priv(dev);
- int ret, i, j;
- unsigned int ioaddr;
- __be16 *phys_addr;
- char *cardname;
- __u32 config;
- u8 *buf;
- size_t len;
-
- phys_addr = (__be16 *)dev->dev_addr;
-
- dev_dbg(&link->dev, "3c574_config()\n");
-
- link->io_lines = 16;
-
- for (i = j = 0; j < 0x400; j += 0x20) {
- link->resource[0]->start = j ^ 0x300;
- i = pcmcia_request_io(link);
- if (i == 0)
- break;
- }
- if (i != 0)
- goto failed;
-
- ret = pcmcia_request_irq(link, el3_interrupt);
- if (ret)
- goto failed;
-
- ret = pcmcia_enable_device(link);
- if (ret)
- goto failed;
-
- dev->irq = link->irq;
- dev->base_addr = link->resource[0]->start;
-
- ioaddr = dev->base_addr;
-
- /* The 3c574 normally uses an EEPROM for configuration info, including
- the hardware address. The future products may include a modem chip
- and put the address in the CIS. */
-
- len = pcmcia_get_tuple(link, 0x88, &buf);
- if (buf && len >= 6) {
- for (i = 0; i < 3; i++)
- phys_addr[i] = htons(le16_to_cpu(buf[i * 2]));
- kfree(buf);
- } else {
- kfree(buf); /* 0 < len < 6 */
- EL3WINDOW(0);
- for (i = 0; i < 3; i++)
- phys_addr[i] = htons(read_eeprom(ioaddr, i + 10));
- if (phys_addr[0] == htons(0x6060)) {
- pr_notice("IO port conflict at 0x%03lx-0x%03lx\n",
- dev->base_addr, dev->base_addr+15);
- goto failed;
- }
- }
- if (link->prod_id[1])
- cardname = link->prod_id[1];
- else
- cardname = "3Com 3c574";
-
- {
- u_char mcr;
- outw(2<<11, ioaddr + RunnerRdCtrl);
- mcr = inb(ioaddr + 2);
- outw(0<<11, ioaddr + RunnerRdCtrl);
- pr_info(" ASIC rev %d,", mcr>>3);
- EL3WINDOW(3);
- config = inl(ioaddr + Wn3_Config);
- lp->default_media = (config & Xcvr) >> Xcvr_shift;
- lp->autoselect = config & Autoselect ? 1 : 0;
- }
-
- init_timer(&lp->media);
-
- {
- int phy;
-
- /* Roadrunner only: Turn on the MII transceiver */
- outw(0x8040, ioaddr + Wn3_Options);
- mdelay(1);
- outw(0xc040, ioaddr + Wn3_Options);
- tc574_wait_for_completion(dev, TxReset);
- tc574_wait_for_completion(dev, RxReset);
- mdelay(1);
- outw(0x8040, ioaddr + Wn3_Options);
-
- EL3WINDOW(4);
- for (phy = 1; phy <= 32; phy++) {
- int mii_status;
- mdio_sync(ioaddr, 32);
- mii_status = mdio_read(ioaddr, phy & 0x1f, 1);
- if (mii_status != 0xffff) {
- lp->phys = phy & 0x1f;
- dev_dbg(&link->dev, " MII transceiver at "
- "index %d, status %x.\n",
- phy, mii_status);
- if ((mii_status & 0x0040) == 0)
- mii_preamble_required = 1;
- break;
- }
- }
- if (phy > 32) {
- pr_notice(" No MII transceivers found!\n");
- goto failed;
- }
- i = mdio_read(ioaddr, lp->phys, 16) | 0x40;
- mdio_write(ioaddr, lp->phys, 16, i);
- lp->advertising = mdio_read(ioaddr, lp->phys, 4);
- if (full_duplex) {
- /* Only advertise the FD media types. */
- lp->advertising &= ~0x02a0;
- mdio_write(ioaddr, lp->phys, 4, lp->advertising);
- }
- }
-
- SET_NETDEV_DEV(dev, &link->dev);
-
- if (register_netdev(dev) != 0) {
- pr_notice("register_netdev() failed\n");
- goto failed;
- }
-
- netdev_info(dev, "%s at io %#3lx, irq %d, hw_addr %pM\n",
- cardname, dev->base_addr, dev->irq, dev->dev_addr);
- netdev_info(dev, " %dK FIFO split %s Rx:Tx, %sMII interface.\n",
- 8 << config & Ram_size,
- ram_split[(config & Ram_split) >> Ram_split_shift],
- config & Autoselect ? "autoselect " : "");
-
- return 0;
-
-failed:
- tc574_release(link);
- return -ENODEV;
-
-} /* tc574_config */
-
-static void tc574_release(struct pcmcia_device *link)
-{
- pcmcia_disable_device(link);
-}
-
-static int tc574_suspend(struct pcmcia_device *link)
-{
- struct net_device *dev = link->priv;
-
- if (link->open)
- netif_device_detach(dev);
-
- return 0;
-}
-
-static int tc574_resume(struct pcmcia_device *link)
-{
- struct net_device *dev = link->priv;
-
- if (link->open) {
- tc574_reset(dev);
- netif_device_attach(dev);
- }
-
- return 0;
-}
-
-static void dump_status(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- EL3WINDOW(1);
- netdev_info(dev, " irq status %04x, rx status %04x, tx status %02x, tx free %04x\n",
- inw(ioaddr+EL3_STATUS),
- inw(ioaddr+RxStatus), inb(ioaddr+TxStatus),
- inw(ioaddr+TxFree));
- EL3WINDOW(4);
- netdev_info(dev, " diagnostics: fifo %04x net %04x ethernet %04x media %04x\n",
- inw(ioaddr+0x04), inw(ioaddr+0x06),
- inw(ioaddr+0x08), inw(ioaddr+0x0a));
- EL3WINDOW(1);
-}
-
-/*
- Use this for commands that may take time to finish
-*/
-static void tc574_wait_for_completion(struct net_device *dev, int cmd)
-{
- int i = 1500;
- outw(cmd, dev->base_addr + EL3_CMD);
- while (--i > 0)
- if (!(inw(dev->base_addr + EL3_STATUS) & 0x1000)) break;
- if (i == 0)
- netdev_notice(dev, "command 0x%04x did not complete!\n", cmd);
-}
-
-/* Read a word from the EEPROM using the regular EEPROM access register.
- Assume that we are in register window zero.
- */
-static unsigned short read_eeprom(unsigned int ioaddr, int index)
-{
- int timer;
- outw(EEPROM_Read + index, ioaddr + Wn0EepromCmd);
- /* Pause for at least 162 usec for the read to take place. */
- for (timer = 1620; timer >= 0; timer--) {
- if ((inw(ioaddr + Wn0EepromCmd) & 0x8000) == 0)
- break;
- }
- return inw(ioaddr + Wn0EepromData);
-}
-
-/* MII transceiver control section.
- Read and write the MII registers using software-generated serial
- MDIO protocol. See the MII specifications or DP83840A data sheet
- for details.
- The maxium data clock rate is 2.5 Mhz. The timing is easily met by the
- slow PC card interface. */
-
-#define MDIO_SHIFT_CLK 0x01
-#define MDIO_DIR_WRITE 0x04
-#define MDIO_DATA_WRITE0 (0x00 | MDIO_DIR_WRITE)
-#define MDIO_DATA_WRITE1 (0x02 | MDIO_DIR_WRITE)
-#define MDIO_DATA_READ 0x02
-#define MDIO_ENB_IN 0x00
-
-/* Generate the preamble required for initial synchronization and
- a few older transceivers. */
-static void mdio_sync(unsigned int ioaddr, int bits)
-{
- unsigned int mdio_addr = ioaddr + Wn4_PhysicalMgmt;
-
- /* Establish sync by sending at least 32 logic ones. */
- while (-- bits >= 0) {
- outw(MDIO_DATA_WRITE1, mdio_addr);
- outw(MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK, mdio_addr);
- }
-}
-
-static int mdio_read(unsigned int ioaddr, int phy_id, int location)
-{
- int i;
- int read_cmd = (0xf6 << 10) | (phy_id << 5) | location;
- unsigned int retval = 0;
- unsigned int mdio_addr = ioaddr + Wn4_PhysicalMgmt;
-
- if (mii_preamble_required)
- mdio_sync(ioaddr, 32);
-
- /* Shift the read command bits out. */
- for (i = 14; i >= 0; i--) {
- int dataval = (read_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
- outw(dataval, mdio_addr);
- outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
- }
- /* Read the two transition, 16 data, and wire-idle bits. */
- for (i = 19; i > 0; i--) {
- outw(MDIO_ENB_IN, mdio_addr);
- retval = (retval << 1) | ((inw(mdio_addr) & MDIO_DATA_READ) ? 1 : 0);
- outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
- }
- return (retval>>1) & 0xffff;
-}
-
-static void mdio_write(unsigned int ioaddr, int phy_id, int location, int value)
-{
- int write_cmd = 0x50020000 | (phy_id << 23) | (location << 18) | value;
- unsigned int mdio_addr = ioaddr + Wn4_PhysicalMgmt;
- int i;
-
- if (mii_preamble_required)
- mdio_sync(ioaddr, 32);
-
- /* Shift the command bits out. */
- for (i = 31; i >= 0; i--) {
- int dataval = (write_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
- outw(dataval, mdio_addr);
- outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
- }
- /* Leave the interface idle. */
- for (i = 1; i >= 0; i--) {
- outw(MDIO_ENB_IN, mdio_addr);
- outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
- }
-}
-
-/* Reset and restore all of the 3c574 registers. */
-static void tc574_reset(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- int i;
- unsigned int ioaddr = dev->base_addr;
- unsigned long flags;
-
- tc574_wait_for_completion(dev, TotalReset|0x10);
-
- spin_lock_irqsave(&lp->window_lock, flags);
- /* Clear any transactions in progress. */
- outw(0, ioaddr + RunnerWrCtrl);
- outw(0, ioaddr + RunnerRdCtrl);
-
- /* Set the station address and mask. */
- EL3WINDOW(2);
- for (i = 0; i < 6; i++)
- outb(dev->dev_addr[i], ioaddr + i);
- for (; i < 12; i+=2)
- outw(0, ioaddr + i);
-
- /* Reset config options */
- EL3WINDOW(3);
- outb((dev->mtu > 1500 ? 0x40 : 0), ioaddr + Wn3_MAC_Ctrl);
- outl((lp->autoselect ? 0x01000000 : 0) | 0x0062001b,
- ioaddr + Wn3_Config);
- /* Roadrunner only: Turn on the MII transceiver. */
- outw(0x8040, ioaddr + Wn3_Options);
- mdelay(1);
- outw(0xc040, ioaddr + Wn3_Options);
- EL3WINDOW(1);
- spin_unlock_irqrestore(&lp->window_lock, flags);
-
- tc574_wait_for_completion(dev, TxReset);
- tc574_wait_for_completion(dev, RxReset);
- mdelay(1);
- spin_lock_irqsave(&lp->window_lock, flags);
- EL3WINDOW(3);
- outw(0x8040, ioaddr + Wn3_Options);
-
- /* Switch to the stats window, and clear all stats by reading. */
- outw(StatsDisable, ioaddr + EL3_CMD);
- EL3WINDOW(6);
- for (i = 0; i < 10; i++)
- inb(ioaddr + i);
- inw(ioaddr + 10);
- inw(ioaddr + 12);
- EL3WINDOW(4);
- inb(ioaddr + 12);
- inb(ioaddr + 13);
-
- /* .. enable any extra statistics bits.. */
- outw(0x0040, ioaddr + Wn4_NetDiag);
-
- EL3WINDOW(1);
- spin_unlock_irqrestore(&lp->window_lock, flags);
-
- /* .. re-sync MII and re-fill what NWay is advertising. */
- mdio_sync(ioaddr, 32);
- mdio_write(ioaddr, lp->phys, 4, lp->advertising);
- if (!auto_polarity) {
- /* works for TDK 78Q2120 series MII's */
- i = mdio_read(ioaddr, lp->phys, 16) | 0x20;
- mdio_write(ioaddr, lp->phys, 16, i);
- }
-
- spin_lock_irqsave(&lp->window_lock, flags);
- /* Switch to register set 1 for normal use, just for TxFree. */
- set_rx_mode(dev);
- spin_unlock_irqrestore(&lp->window_lock, flags);
- outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
- outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
- outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
- /* Allow status bits to be seen. */
- outw(SetStatusEnb | 0xff, ioaddr + EL3_CMD);
- /* Ack all pending events, and set active indicator mask. */
- outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
- ioaddr + EL3_CMD);
- outw(SetIntrEnb | IntLatch | TxAvailable | RxComplete | StatsFull
- | AdapterFailure | RxEarly, ioaddr + EL3_CMD);
-}
-
-static int el3_open(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- struct pcmcia_device *link = lp->p_dev;
-
- if (!pcmcia_dev_present(link))
- return -ENODEV;
-
- link->open++;
- netif_start_queue(dev);
-
- tc574_reset(dev);
- lp->media.function = media_check;
- lp->media.data = (unsigned long) dev;
- lp->media.expires = jiffies + HZ;
- add_timer(&lp->media);
-
- dev_dbg(&link->dev, "%s: opened, status %4.4x.\n",
- dev->name, inw(dev->base_addr + EL3_STATUS));
-
- return 0;
-}
-
-static void el3_tx_timeout(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
-
- netdev_notice(dev, "Transmit timed out!\n");
- dump_status(dev);
- dev->stats.tx_errors++;
- dev->trans_start = jiffies; /* prevent tx timeout */
- /* Issue TX_RESET and TX_START commands. */
- tc574_wait_for_completion(dev, TxReset);
- outw(TxEnable, ioaddr + EL3_CMD);
- netif_wake_queue(dev);
-}
-
-static void pop_tx_status(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- int i;
-
- /* Clear the Tx status stack. */
- for (i = 32; i > 0; i--) {
- u_char tx_status = inb(ioaddr + TxStatus);
- if (!(tx_status & 0x84))
- break;
- /* reset transmitter on jabber error or underrun */
- if (tx_status & 0x30)
- tc574_wait_for_completion(dev, TxReset);
- if (tx_status & 0x38) {
- pr_debug("%s: transmit error: status 0x%02x\n",
- dev->name, tx_status);
- outw(TxEnable, ioaddr + EL3_CMD);
- dev->stats.tx_aborted_errors++;
- }
- outb(0x00, ioaddr + TxStatus); /* Pop the status stack. */
- }
-}
-
-static netdev_tx_t el3_start_xmit(struct sk_buff *skb,
- struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- struct el3_private *lp = netdev_priv(dev);
- unsigned long flags;
-
- pr_debug("%s: el3_start_xmit(length = %ld) called, "
- "status %4.4x.\n", dev->name, (long)skb->len,
- inw(ioaddr + EL3_STATUS));
-
- spin_lock_irqsave(&lp->window_lock, flags);
-
- dev->stats.tx_bytes += skb->len;
-
- /* Put out the doubleword header... */
- outw(skb->len, ioaddr + TX_FIFO);
- outw(0, ioaddr + TX_FIFO);
- /* ... and the packet rounded to a doubleword. */
- outsl(ioaddr + TX_FIFO, skb->data, (skb->len+3)>>2);
-
- /* TxFree appears only in Window 1, not offset 0x1c. */
- if (inw(ioaddr + TxFree) <= 1536) {
- netif_stop_queue(dev);
- /* Interrupt us when the FIFO has room for max-sized packet.
- The threshold is in units of dwords. */
- outw(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
- }
-
- pop_tx_status(dev);
- spin_unlock_irqrestore(&lp->window_lock, flags);
- dev_kfree_skb(skb);
- return NETDEV_TX_OK;
-}
-
-/* The EL3 interrupt handler. */
-static irqreturn_t el3_interrupt(int irq, void *dev_id)
-{
- struct net_device *dev = (struct net_device *) dev_id;
- struct el3_private *lp = netdev_priv(dev);
- unsigned int ioaddr;
- unsigned status;
- int work_budget = max_interrupt_work;
- int handled = 0;
-
- if (!netif_device_present(dev))
- return IRQ_NONE;
- ioaddr = dev->base_addr;
-
- pr_debug("%s: interrupt, status %4.4x.\n",
- dev->name, inw(ioaddr + EL3_STATUS));
-
- spin_lock(&lp->window_lock);
-
- while ((status = inw(ioaddr + EL3_STATUS)) &
- (IntLatch | RxComplete | RxEarly | StatsFull)) {
- if (!netif_device_present(dev) ||
- ((status & 0xe000) != 0x2000)) {
- pr_debug("%s: Interrupt from dead card\n", dev->name);
- break;
- }
-
- handled = 1;
-
- if (status & RxComplete)
- work_budget = el3_rx(dev, work_budget);
-
- if (status & TxAvailable) {
- pr_debug(" TX room bit was handled.\n");
- /* There's room in the FIFO for a full-sized packet. */
- outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
- netif_wake_queue(dev);
- }
-
- if (status & TxComplete)
- pop_tx_status(dev);
-
- if (status & (AdapterFailure | RxEarly | StatsFull)) {
- /* Handle all uncommon interrupts. */
- if (status & StatsFull)
- update_stats(dev);
- if (status & RxEarly) {
- work_budget = el3_rx(dev, work_budget);
- outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
- }
- if (status & AdapterFailure) {
- u16 fifo_diag;
- EL3WINDOW(4);
- fifo_diag = inw(ioaddr + Wn4_FIFODiag);
- EL3WINDOW(1);
- netdev_notice(dev, "adapter failure, FIFO diagnostic register %04x\n",
- fifo_diag);
- if (fifo_diag & 0x0400) {
- /* Tx overrun */
- tc574_wait_for_completion(dev, TxReset);
- outw(TxEnable, ioaddr + EL3_CMD);
- }
- if (fifo_diag & 0x2000) {
- /* Rx underrun */
- tc574_wait_for_completion(dev, RxReset);
- set_rx_mode(dev);
- outw(RxEnable, ioaddr + EL3_CMD);
- }
- outw(AckIntr | AdapterFailure, ioaddr + EL3_CMD);
- }
- }
-
- if (--work_budget < 0) {
- pr_debug("%s: Too much work in interrupt, "
- "status %4.4x.\n", dev->name, status);
- /* Clear all interrupts */
- outw(AckIntr | 0xFF, ioaddr + EL3_CMD);
- break;
- }
- /* Acknowledge the IRQ. */
- outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
- }
-
- pr_debug("%s: exiting interrupt, status %4.4x.\n",
- dev->name, inw(ioaddr + EL3_STATUS));
-
- spin_unlock(&lp->window_lock);
- return IRQ_RETVAL(handled);
-}
-
-/*
- This timer serves two purposes: to check for missed interrupts
- (and as a last resort, poll the NIC for events), and to monitor
- the MII, reporting changes in cable status.
-*/
-static void media_check(unsigned long arg)
-{
- struct net_device *dev = (struct net_device *) arg;
- struct el3_private *lp = netdev_priv(dev);
- unsigned int ioaddr = dev->base_addr;
- unsigned long flags;
- unsigned short /* cable, */ media, partner;
-
- if (!netif_device_present(dev))
- goto reschedule;
-
- /* Check for pending interrupt with expired latency timer: with
- this, we can limp along even if the interrupt is blocked */
- if ((inw(ioaddr + EL3_STATUS) & IntLatch) && (inb(ioaddr + Timer) == 0xff)) {
- if (!lp->fast_poll)
- netdev_info(dev, "interrupt(s) dropped!\n");
-
- local_irq_save(flags);
- el3_interrupt(dev->irq, dev);
- local_irq_restore(flags);
-
- lp->fast_poll = HZ;
- }
- if (lp->fast_poll) {
- lp->fast_poll--;
- lp->media.expires = jiffies + 2*HZ/100;
- add_timer(&lp->media);
- return;
- }
-
- spin_lock_irqsave(&lp->window_lock, flags);
- EL3WINDOW(4);
- media = mdio_read(ioaddr, lp->phys, 1);
- partner = mdio_read(ioaddr, lp->phys, 5);
- EL3WINDOW(1);
-
- if (media != lp->media_status) {
- if ((media ^ lp->media_status) & 0x0004)
- netdev_info(dev, "%s link beat\n",
- (lp->media_status & 0x0004) ? "lost" : "found");
- if ((media ^ lp->media_status) & 0x0020) {
- lp->partner = 0;
- if (lp->media_status & 0x0020) {
- netdev_info(dev, "autonegotiation restarted\n");
- } else if (partner) {
- partner &= lp->advertising;
- lp->partner = partner;
- netdev_info(dev, "autonegotiation complete: "
- "%dbaseT-%cD selected\n",
- (partner & 0x0180) ? 100 : 10,
- (partner & 0x0140) ? 'F' : 'H');
- } else {
- netdev_info(dev, "link partner did not autonegotiate\n");
- }
-
- EL3WINDOW(3);
- outb((partner & 0x0140 ? 0x20 : 0) |
- (dev->mtu > 1500 ? 0x40 : 0), ioaddr + Wn3_MAC_Ctrl);
- EL3WINDOW(1);
-
- }
- if (media & 0x0010)
- netdev_info(dev, "remote fault detected\n");
- if (media & 0x0002)
- netdev_info(dev, "jabber detected\n");
- lp->media_status = media;
- }
- spin_unlock_irqrestore(&lp->window_lock, flags);
-
-reschedule:
- lp->media.expires = jiffies + HZ;
- add_timer(&lp->media);
-}
-
-static struct net_device_stats *el3_get_stats(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
-
- if (netif_device_present(dev)) {
- unsigned long flags;
- spin_lock_irqsave(&lp->window_lock, flags);
- update_stats(dev);
- spin_unlock_irqrestore(&lp->window_lock, flags);
- }
- return &dev->stats;
-}
-
-/* Update statistics.
- Surprisingly this need not be run single-threaded, but it effectively is.
- The counters clear when read, so the adds must merely be atomic.
- */
-static void update_stats(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- u8 rx, tx, up;
-
- pr_debug("%s: updating the statistics.\n", dev->name);
-
- if (inw(ioaddr+EL3_STATUS) == 0xffff) /* No card. */
- return;
-
- /* Unlike the 3c509 we need not turn off stats updates while reading. */
- /* Switch to the stats window, and read everything. */
- EL3WINDOW(6);
- dev->stats.tx_carrier_errors += inb(ioaddr + 0);
- dev->stats.tx_heartbeat_errors += inb(ioaddr + 1);
- /* Multiple collisions. */ inb(ioaddr + 2);
- dev->stats.collisions += inb(ioaddr + 3);
- dev->stats.tx_window_errors += inb(ioaddr + 4);
- dev->stats.rx_fifo_errors += inb(ioaddr + 5);
- dev->stats.tx_packets += inb(ioaddr + 6);
- up = inb(ioaddr + 9);
- dev->stats.tx_packets += (up&0x30) << 4;
- /* Rx packets */ inb(ioaddr + 7);
- /* Tx deferrals */ inb(ioaddr + 8);
- rx = inw(ioaddr + 10);
- tx = inw(ioaddr + 12);
-
- EL3WINDOW(4);
- /* BadSSD */ inb(ioaddr + 12);
- up = inb(ioaddr + 13);
-
- EL3WINDOW(1);
-}
-
-static int el3_rx(struct net_device *dev, int worklimit)
-{
- unsigned int ioaddr = dev->base_addr;
- short rx_status;
-
- pr_debug("%s: in rx_packet(), status %4.4x, rx_status %4.4x.\n",
- dev->name, inw(ioaddr+EL3_STATUS), inw(ioaddr+RxStatus));
- while (!((rx_status = inw(ioaddr + RxStatus)) & 0x8000) &&
- worklimit > 0) {
- worklimit--;
- if (rx_status & 0x4000) { /* Error, update stats. */
- short error = rx_status & 0x3800;
- dev->stats.rx_errors++;
- switch (error) {
- case 0x0000: dev->stats.rx_over_errors++; break;
- case 0x0800: dev->stats.rx_length_errors++; break;
- case 0x1000: dev->stats.rx_frame_errors++; break;
- case 0x1800: dev->stats.rx_length_errors++; break;
- case 0x2000: dev->stats.rx_frame_errors++; break;
- case 0x2800: dev->stats.rx_crc_errors++; break;
- }
- } else {
- short pkt_len = rx_status & 0x7ff;
- struct sk_buff *skb;
-
- skb = dev_alloc_skb(pkt_len+5);
-
- pr_debug(" Receiving packet size %d status %4.4x.\n",
- pkt_len, rx_status);
- if (skb != NULL) {
- skb_reserve(skb, 2);
- insl(ioaddr+RX_FIFO, skb_put(skb, pkt_len),
- ((pkt_len+3)>>2));
- skb->protocol = eth_type_trans(skb, dev);
- netif_rx(skb);
- dev->stats.rx_packets++;
- dev->stats.rx_bytes += pkt_len;
- } else {
- pr_debug("%s: couldn't allocate a sk_buff of"
- " size %d.\n", dev->name, pkt_len);
- dev->stats.rx_dropped++;
- }
- }
- tc574_wait_for_completion(dev, RxDiscard);
- }
-
- return worklimit;
-}
-
-/* Provide ioctl() calls to examine the MII xcvr state. */
-static int el3_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
-{
- struct el3_private *lp = netdev_priv(dev);
- unsigned int ioaddr = dev->base_addr;
- struct mii_ioctl_data *data = if_mii(rq);
- int phy = lp->phys & 0x1f;
-
- pr_debug("%s: In ioct(%-.6s, %#4.4x) %4.4x %4.4x %4.4x %4.4x.\n",
- dev->name, rq->ifr_ifrn.ifrn_name, cmd,
- data->phy_id, data->reg_num, data->val_in, data->val_out);
-
- switch(cmd) {
- case SIOCGMIIPHY: /* Get the address of the PHY in use. */
- data->phy_id = phy;
- case SIOCGMIIREG: /* Read the specified MII register. */
- {
- int saved_window;
- unsigned long flags;
-
- spin_lock_irqsave(&lp->window_lock, flags);
- saved_window = inw(ioaddr + EL3_CMD) >> 13;
- EL3WINDOW(4);
- data->val_out = mdio_read(ioaddr, data->phy_id & 0x1f,
- data->reg_num & 0x1f);
- EL3WINDOW(saved_window);
- spin_unlock_irqrestore(&lp->window_lock, flags);
- return 0;
- }
- case SIOCSMIIREG: /* Write the specified MII register */
- {
- int saved_window;
- unsigned long flags;
-
- spin_lock_irqsave(&lp->window_lock, flags);
- saved_window = inw(ioaddr + EL3_CMD) >> 13;
- EL3WINDOW(4);
- mdio_write(ioaddr, data->phy_id & 0x1f,
- data->reg_num & 0x1f, data->val_in);
- EL3WINDOW(saved_window);
- spin_unlock_irqrestore(&lp->window_lock, flags);
- return 0;
- }
- default:
- return -EOPNOTSUPP;
- }
-}
-
-/* The Odie chip has a 64 bin multicast filter, but the bit layout is not
- documented. Until it is we revert to receiving all multicast frames when
- any multicast reception is desired.
- Note: My other drivers emit a log message whenever promiscuous mode is
- entered to help detect password sniffers. This is less desirable on
- typical PC card machines, so we omit the message.
- */
-
-static void set_rx_mode(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
-
- if (dev->flags & IFF_PROMISC)
- outw(SetRxFilter | RxStation | RxMulticast | RxBroadcast | RxProm,
- ioaddr + EL3_CMD);
- else if (!netdev_mc_empty(dev) || (dev->flags & IFF_ALLMULTI))
- outw(SetRxFilter|RxStation|RxMulticast|RxBroadcast, ioaddr + EL3_CMD);
- else
- outw(SetRxFilter | RxStation | RxBroadcast, ioaddr + EL3_CMD);
-}
-
-static void set_multicast_list(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- unsigned long flags;
-
- spin_lock_irqsave(&lp->window_lock, flags);
- set_rx_mode(dev);
- spin_unlock_irqrestore(&lp->window_lock, flags);
-}
-
-static int el3_close(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- struct el3_private *lp = netdev_priv(dev);
- struct pcmcia_device *link = lp->p_dev;
-
- dev_dbg(&link->dev, "%s: shutting down ethercard.\n", dev->name);
-
- if (pcmcia_dev_present(link)) {
- unsigned long flags;
-
- /* Turn off statistics ASAP. We update lp->stats below. */
- outw(StatsDisable, ioaddr + EL3_CMD);
-
- /* Disable the receiver and transmitter. */
- outw(RxDisable, ioaddr + EL3_CMD);
- outw(TxDisable, ioaddr + EL3_CMD);
-
- /* Note: Switching to window 0 may disable the IRQ. */
- EL3WINDOW(0);
- spin_lock_irqsave(&lp->window_lock, flags);
- update_stats(dev);
- spin_unlock_irqrestore(&lp->window_lock, flags);
-
- /* force interrupts off */
- outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
- }
-
- link->open--;
- netif_stop_queue(dev);
- del_timer_sync(&lp->media);
-
- return 0;
-}
-
-static const struct pcmcia_device_id tc574_ids[] = {
- PCMCIA_DEVICE_MANF_CARD(0x0101, 0x0574),
- PCMCIA_MFC_DEVICE_CIS_MANF_CARD(0, 0x0101, 0x0556, "cis/3CCFEM556.cis"),
- PCMCIA_DEVICE_NULL,
-};
-MODULE_DEVICE_TABLE(pcmcia, tc574_ids);
-
-static struct pcmcia_driver tc574_driver = {
- .owner = THIS_MODULE,
- .name = "3c574_cs",
- .probe = tc574_probe,
- .remove = tc574_detach,
- .id_table = tc574_ids,
- .suspend = tc574_suspend,
- .resume = tc574_resume,
-};
-
-static int __init init_tc574(void)
-{
- return pcmcia_register_driver(&tc574_driver);
-}
-
-static void __exit exit_tc574(void)
-{
- pcmcia_unregister_driver(&tc574_driver);
-}
-
-module_init(init_tc574);
-module_exit(exit_tc574);
+++ /dev/null
-/*======================================================================
-
- A PCMCIA ethernet driver for the 3com 3c589 card.
-
- Copyright (C) 1999 David A. Hinds -- dahinds@users.sourceforge.net
-
- 3c589_cs.c 1.162 2001/10/13 00:08:50
-
- The network driver code is based on Donald Becker's 3c589 code:
-
- Written 1994 by Donald Becker.
- Copyright 1993 United States Government as represented by the
- Director, National Security Agency. This software may be used and
- distributed according to the terms of the GNU General Public License,
- incorporated herein by reference.
- Donald Becker may be reached at becker@scyld.com
-
- Updated for 2.5.x by Alan Cox <alan@lxorguk.ukuu.org.uk>
-
-======================================================================*/
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#define DRV_NAME "3c589_cs"
-#define DRV_VERSION "1.162-ac"
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/ptrace.h>
-#include <linux/slab.h>
-#include <linux/string.h>
-#include <linux/timer.h>
-#include <linux/interrupt.h>
-#include <linux/in.h>
-#include <linux/delay.h>
-#include <linux/ethtool.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/if_arp.h>
-#include <linux/ioport.h>
-#include <linux/bitops.h>
-#include <linux/jiffies.h>
-
-#include <pcmcia/cistpl.h>
-#include <pcmcia/cisreg.h>
-#include <pcmcia/ciscode.h>
-#include <pcmcia/ds.h>
-
-#include <asm/uaccess.h>
-#include <asm/io.h>
-#include <asm/system.h>
-
-/* To minimize the size of the driver source I only define operating
- constants if they are used several times. You'll need the manual
- if you want to understand driver details. */
-/* Offsets from base I/O address. */
-#define EL3_DATA 0x00
-#define EL3_TIMER 0x0a
-#define EL3_CMD 0x0e
-#define EL3_STATUS 0x0e
-
-#define EEPROM_READ 0x0080
-#define EEPROM_BUSY 0x8000
-
-#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
-
-/* The top five bits written to EL3_CMD are a command, the lower
- 11 bits are the parameter, if applicable. */
-enum c509cmd {
- TotalReset = 0<<11,
- SelectWindow = 1<<11,
- StartCoax = 2<<11,
- RxDisable = 3<<11,
- RxEnable = 4<<11,
- RxReset = 5<<11,
- RxDiscard = 8<<11,
- TxEnable = 9<<11,
- TxDisable = 10<<11,
- TxReset = 11<<11,
- FakeIntr = 12<<11,
- AckIntr = 13<<11,
- SetIntrEnb = 14<<11,
- SetStatusEnb = 15<<11,
- SetRxFilter = 16<<11,
- SetRxThreshold = 17<<11,
- SetTxThreshold = 18<<11,
- SetTxStart = 19<<11,
- StatsEnable = 21<<11,
- StatsDisable = 22<<11,
- StopCoax = 23<<11
-};
-
-enum c509status {
- IntLatch = 0x0001,
- AdapterFailure = 0x0002,
- TxComplete = 0x0004,
- TxAvailable = 0x0008,
- RxComplete = 0x0010,
- RxEarly = 0x0020,
- IntReq = 0x0040,
- StatsFull = 0x0080,
- CmdBusy = 0x1000
-};
-
-/* The SetRxFilter command accepts the following classes: */
-enum RxFilter {
- RxStation = 1,
- RxMulticast = 2,
- RxBroadcast = 4,
- RxProm = 8
-};
-
-/* Register window 1 offsets, the window used in normal operation. */
-#define TX_FIFO 0x00
-#define RX_FIFO 0x00
-#define RX_STATUS 0x08
-#define TX_STATUS 0x0B
-#define TX_FREE 0x0C /* Remaining free bytes in Tx buffer. */
-
-#define WN0_IRQ 0x08 /* Window 0: Set IRQ line in bits 12-15. */
-#define WN4_MEDIA 0x0A /* Window 4: Various transcvr/media bits. */
-#define MEDIA_TP 0x00C0 /* Enable link beat and jabber for 10baseT. */
-#define MEDIA_LED 0x0001 /* Enable link light on 3C589E cards. */
-
-/* Time in jiffies before concluding Tx hung */
-#define TX_TIMEOUT ((400*HZ)/1000)
-
-struct el3_private {
- struct pcmcia_device *p_dev;
- /* For transceiver monitoring */
- struct timer_list media;
- u16 media_status;
- u16 fast_poll;
- unsigned long last_irq;
- spinlock_t lock;
-};
-
-static const char *if_names[] = { "auto", "10baseT", "10base2", "AUI" };
-
-/*====================================================================*/
-
-/* Module parameters */
-
-MODULE_AUTHOR("David Hinds <dahinds@users.sourceforge.net>");
-MODULE_DESCRIPTION("3Com 3c589 series PCMCIA ethernet driver");
-MODULE_LICENSE("GPL");
-
-#define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0)
-
-/* Special hook for setting if_port when module is loaded */
-INT_MODULE_PARM(if_port, 0);
-
-
-/*====================================================================*/
-
-static int tc589_config(struct pcmcia_device *link);
-static void tc589_release(struct pcmcia_device *link);
-
-static u16 read_eeprom(unsigned int ioaddr, int index);
-static void tc589_reset(struct net_device *dev);
-static void media_check(unsigned long arg);
-static int el3_config(struct net_device *dev, struct ifmap *map);
-static int el3_open(struct net_device *dev);
-static netdev_tx_t el3_start_xmit(struct sk_buff *skb,
- struct net_device *dev);
-static irqreturn_t el3_interrupt(int irq, void *dev_id);
-static void update_stats(struct net_device *dev);
-static struct net_device_stats *el3_get_stats(struct net_device *dev);
-static int el3_rx(struct net_device *dev);
-static int el3_close(struct net_device *dev);
-static void el3_tx_timeout(struct net_device *dev);
-static void set_rx_mode(struct net_device *dev);
-static void set_multicast_list(struct net_device *dev);
-static const struct ethtool_ops netdev_ethtool_ops;
-
-static void tc589_detach(struct pcmcia_device *p_dev);
-
-static const struct net_device_ops el3_netdev_ops = {
- .ndo_open = el3_open,
- .ndo_stop = el3_close,
- .ndo_start_xmit = el3_start_xmit,
- .ndo_tx_timeout = el3_tx_timeout,
- .ndo_set_config = el3_config,
- .ndo_get_stats = el3_get_stats,
- .ndo_set_multicast_list = set_multicast_list,
- .ndo_change_mtu = eth_change_mtu,
- .ndo_set_mac_address = eth_mac_addr,
- .ndo_validate_addr = eth_validate_addr,
-};
-
-static int tc589_probe(struct pcmcia_device *link)
-{
- struct el3_private *lp;
- struct net_device *dev;
-
- dev_dbg(&link->dev, "3c589_attach()\n");
-
- /* Create new ethernet device */
- dev = alloc_etherdev(sizeof(struct el3_private));
- if (!dev)
- return -ENOMEM;
- lp = netdev_priv(dev);
- link->priv = dev;
- lp->p_dev = link;
-
- spin_lock_init(&lp->lock);
- link->resource[0]->end = 16;
- link->resource[0]->flags |= IO_DATA_PATH_WIDTH_16;
-
- link->config_flags |= CONF_ENABLE_IRQ;
- link->config_index = 1;
-
- dev->netdev_ops = &el3_netdev_ops;
- dev->watchdog_timeo = TX_TIMEOUT;
-
- SET_ETHTOOL_OPS(dev, &netdev_ethtool_ops);
-
- return tc589_config(link);
-}
-
-static void tc589_detach(struct pcmcia_device *link)
-{
- struct net_device *dev = link->priv;
-
- dev_dbg(&link->dev, "3c589_detach\n");
-
- unregister_netdev(dev);
-
- tc589_release(link);
-
- free_netdev(dev);
-} /* tc589_detach */
-
-static int tc589_config(struct pcmcia_device *link)
-{
- struct net_device *dev = link->priv;
- __be16 *phys_addr;
- int ret, i, j, multi = 0, fifo;
- unsigned int ioaddr;
- static const char * const ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
- u8 *buf;
- size_t len;
-
- dev_dbg(&link->dev, "3c589_config\n");
-
- phys_addr = (__be16 *)dev->dev_addr;
- /* Is this a 3c562? */
- if (link->manf_id != MANFID_3COM)
- dev_info(&link->dev, "hmmm, is this really a 3Com card??\n");
- multi = (link->card_id == PRODID_3COM_3C562);
-
- link->io_lines = 16;
-
- /* For the 3c562, the base address must be xx00-xx7f */
- for (i = j = 0; j < 0x400; j += 0x10) {
- if (multi && (j & 0x80)) continue;
- link->resource[0]->start = j ^ 0x300;
- i = pcmcia_request_io(link);
- if (i == 0)
- break;
- }
- if (i != 0)
- goto failed;
-
- ret = pcmcia_request_irq(link, el3_interrupt);
- if (ret)
- goto failed;
-
- ret = pcmcia_enable_device(link);
- if (ret)
- goto failed;
-
- dev->irq = link->irq;
- dev->base_addr = link->resource[0]->start;
- ioaddr = dev->base_addr;
- EL3WINDOW(0);
-
- /* The 3c589 has an extra EEPROM for configuration info, including
- the hardware address. The 3c562 puts the address in the CIS. */
- len = pcmcia_get_tuple(link, 0x88, &buf);
- if (buf && len >= 6) {
- for (i = 0; i < 3; i++)
- phys_addr[i] = htons(le16_to_cpu(buf[i*2]));
- kfree(buf);
- } else {
- kfree(buf); /* 0 < len < 6 */
- for (i = 0; i < 3; i++)
- phys_addr[i] = htons(read_eeprom(ioaddr, i));
- if (phys_addr[0] == htons(0x6060)) {
- dev_err(&link->dev, "IO port conflict at 0x%03lx-0x%03lx\n",
- dev->base_addr, dev->base_addr+15);
- goto failed;
- }
- }
-
- /* The address and resource configuration registers aren't loaded from
- the EEPROM and *must* be set to 0 and IRQ3 for the PCMCIA version. */
- outw(0x3f00, ioaddr + 8);
- fifo = inl(ioaddr);
-
- /* The if_port symbol can be set when the module is loaded */
- if ((if_port >= 0) && (if_port <= 3))
- dev->if_port = if_port;
- else
- dev_err(&link->dev, "invalid if_port requested\n");
-
- SET_NETDEV_DEV(dev, &link->dev);
-
- if (register_netdev(dev) != 0) {
- dev_err(&link->dev, "register_netdev() failed\n");
- goto failed;
- }
-
- netdev_info(dev, "3Com 3c%s, io %#3lx, irq %d, hw_addr %pM\n",
- (multi ? "562" : "589"), dev->base_addr, dev->irq,
- dev->dev_addr);
- netdev_info(dev, " %dK FIFO split %s Rx:Tx, %s xcvr\n",
- (fifo & 7) ? 32 : 8, ram_split[(fifo >> 16) & 3],
- if_names[dev->if_port]);
- return 0;
-
-failed:
- tc589_release(link);
- return -ENODEV;
-} /* tc589_config */
-
-static void tc589_release(struct pcmcia_device *link)
-{
- pcmcia_disable_device(link);
-}
-
-static int tc589_suspend(struct pcmcia_device *link)
-{
- struct net_device *dev = link->priv;
-
- if (link->open)
- netif_device_detach(dev);
-
- return 0;
-}
-
-static int tc589_resume(struct pcmcia_device *link)
-{
- struct net_device *dev = link->priv;
-
- if (link->open) {
- tc589_reset(dev);
- netif_device_attach(dev);
- }
-
- return 0;
-}
-
-/*====================================================================*/
-
-/*
- Use this for commands that may take time to finish
-*/
-static void tc589_wait_for_completion(struct net_device *dev, int cmd)
-{
- int i = 100;
- outw(cmd, dev->base_addr + EL3_CMD);
- while (--i > 0)
- if (!(inw(dev->base_addr + EL3_STATUS) & 0x1000)) break;
- if (i == 0)
- netdev_warn(dev, "command 0x%04x did not complete!\n", cmd);
-}
-
-/*
- Read a word from the EEPROM using the regular EEPROM access register.
- Assume that we are in register window zero.
-*/
-static u16 read_eeprom(unsigned int ioaddr, int index)
-{
- int i;
- outw(EEPROM_READ + index, ioaddr + 10);
- /* Reading the eeprom takes 162 us */
- for (i = 1620; i >= 0; i--)
- if ((inw(ioaddr + 10) & EEPROM_BUSY) == 0)
- break;
- return inw(ioaddr + 12);
-}
-
-/*
- Set transceiver type, perhaps to something other than what the user
- specified in dev->if_port.
-*/
-static void tc589_set_xcvr(struct net_device *dev, int if_port)
-{
- struct el3_private *lp = netdev_priv(dev);
- unsigned int ioaddr = dev->base_addr;
-
- EL3WINDOW(0);
- switch (if_port) {
- case 0: case 1: outw(0, ioaddr + 6); break;
- case 2: outw(3<<14, ioaddr + 6); break;
- case 3: outw(1<<14, ioaddr + 6); break;
- }
- /* On PCMCIA, this just turns on the LED */
- outw((if_port == 2) ? StartCoax : StopCoax, ioaddr + EL3_CMD);
- /* 10baseT interface, enable link beat and jabber check. */
- EL3WINDOW(4);
- outw(MEDIA_LED | ((if_port < 2) ? MEDIA_TP : 0), ioaddr + WN4_MEDIA);
- EL3WINDOW(1);
- if (if_port == 2)
- lp->media_status = ((dev->if_port == 0) ? 0x8000 : 0x4000);
- else
- lp->media_status = ((dev->if_port == 0) ? 0x4010 : 0x8800);
-}
-
-static void dump_status(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- EL3WINDOW(1);
- netdev_info(dev, " irq status %04x, rx status %04x, tx status %02x tx free %04x\n",
- inw(ioaddr+EL3_STATUS), inw(ioaddr+RX_STATUS),
- inb(ioaddr+TX_STATUS), inw(ioaddr+TX_FREE));
- EL3WINDOW(4);
- netdev_info(dev, " diagnostics: fifo %04x net %04x ethernet %04x media %04x\n",
- inw(ioaddr+0x04), inw(ioaddr+0x06), inw(ioaddr+0x08),
- inw(ioaddr+0x0a));
- EL3WINDOW(1);
-}
-
-/* Reset and restore all of the 3c589 registers. */
-static void tc589_reset(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- int i;
-
- EL3WINDOW(0);
- outw(0x0001, ioaddr + 4); /* Activate board. */
- outw(0x3f00, ioaddr + 8); /* Set the IRQ line. */
-
- /* Set the station address in window 2. */
- EL3WINDOW(2);
- for (i = 0; i < 6; i++)
- outb(dev->dev_addr[i], ioaddr + i);
-
- tc589_set_xcvr(dev, dev->if_port);
-
- /* Switch to the stats window, and clear all stats by reading. */
- outw(StatsDisable, ioaddr + EL3_CMD);
- EL3WINDOW(6);
- for (i = 0; i < 9; i++)
- inb(ioaddr+i);
- inw(ioaddr + 10);
- inw(ioaddr + 12);
-
- /* Switch to register set 1 for normal use. */
- EL3WINDOW(1);
-
- set_rx_mode(dev);
- outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
- outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
- outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
- /* Allow status bits to be seen. */
- outw(SetStatusEnb | 0xff, ioaddr + EL3_CMD);
- /* Ack all pending events, and set active indicator mask. */
- outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
- ioaddr + EL3_CMD);
- outw(SetIntrEnb | IntLatch | TxAvailable | RxComplete | StatsFull
- | AdapterFailure, ioaddr + EL3_CMD);
-}
-
-static void netdev_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- strcpy(info->driver, DRV_NAME);
- strcpy(info->version, DRV_VERSION);
- sprintf(info->bus_info, "PCMCIA 0x%lx", dev->base_addr);
-}
-
-static const struct ethtool_ops netdev_ethtool_ops = {
- .get_drvinfo = netdev_get_drvinfo,
-};
-
-static int el3_config(struct net_device *dev, struct ifmap *map)
-{
- if ((map->port != (u_char)(-1)) && (map->port != dev->if_port)) {
- if (map->port <= 3) {
- dev->if_port = map->port;
- netdev_info(dev, "switched to %s port\n", if_names[dev->if_port]);
- tc589_set_xcvr(dev, dev->if_port);
- } else
- return -EINVAL;
- }
- return 0;
-}
-
-static int el3_open(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- struct pcmcia_device *link = lp->p_dev;
-
- if (!pcmcia_dev_present(link))
- return -ENODEV;
-
- link->open++;
- netif_start_queue(dev);
-
- tc589_reset(dev);
- init_timer(&lp->media);
- lp->media.function = media_check;
- lp->media.data = (unsigned long) dev;
- lp->media.expires = jiffies + HZ;
- add_timer(&lp->media);
-
- dev_dbg(&link->dev, "%s: opened, status %4.4x.\n",
- dev->name, inw(dev->base_addr + EL3_STATUS));
-
- return 0;
-}
-
-static void el3_tx_timeout(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
-
- netdev_warn(dev, "Transmit timed out!\n");
- dump_status(dev);
- dev->stats.tx_errors++;
- dev->trans_start = jiffies; /* prevent tx timeout */
- /* Issue TX_RESET and TX_START commands. */
- tc589_wait_for_completion(dev, TxReset);
- outw(TxEnable, ioaddr + EL3_CMD);
- netif_wake_queue(dev);
-}
-
-static void pop_tx_status(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- int i;
-
- /* Clear the Tx status stack. */
- for (i = 32; i > 0; i--) {
- u_char tx_status = inb(ioaddr + TX_STATUS);
- if (!(tx_status & 0x84)) break;
- /* reset transmitter on jabber error or underrun */
- if (tx_status & 0x30)
- tc589_wait_for_completion(dev, TxReset);
- if (tx_status & 0x38) {
- netdev_dbg(dev, "transmit error: status 0x%02x\n", tx_status);
- outw(TxEnable, ioaddr + EL3_CMD);
- dev->stats.tx_aborted_errors++;
- }
- outb(0x00, ioaddr + TX_STATUS); /* Pop the status stack. */
- }
-}
-
-static netdev_tx_t el3_start_xmit(struct sk_buff *skb,
- struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- struct el3_private *priv = netdev_priv(dev);
- unsigned long flags;
-
- netdev_dbg(dev, "el3_start_xmit(length = %ld) called, status %4.4x.\n",
- (long)skb->len, inw(ioaddr + EL3_STATUS));
-
- spin_lock_irqsave(&priv->lock, flags);
-
- dev->stats.tx_bytes += skb->len;
-
- /* Put out the doubleword header... */
- outw(skb->len, ioaddr + TX_FIFO);
- outw(0x00, ioaddr + TX_FIFO);
- /* ... and the packet rounded to a doubleword. */
- outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
-
- if (inw(ioaddr + TX_FREE) <= 1536) {
- netif_stop_queue(dev);
- /* Interrupt us when the FIFO has room for max-sized packet. */
- outw(SetTxThreshold + 1536, ioaddr + EL3_CMD);
- }
-
- pop_tx_status(dev);
- spin_unlock_irqrestore(&priv->lock, flags);
- dev_kfree_skb(skb);
-
- return NETDEV_TX_OK;
-}
-
-/* The EL3 interrupt handler. */
-static irqreturn_t el3_interrupt(int irq, void *dev_id)
-{
- struct net_device *dev = (struct net_device *) dev_id;
- struct el3_private *lp = netdev_priv(dev);
- unsigned int ioaddr;
- __u16 status;
- int i = 0, handled = 1;
-
- if (!netif_device_present(dev))
- return IRQ_NONE;
-
- ioaddr = dev->base_addr;
-
- netdev_dbg(dev, "interrupt, status %4.4x.\n", inw(ioaddr + EL3_STATUS));
-
- spin_lock(&lp->lock);
- while ((status = inw(ioaddr + EL3_STATUS)) &
- (IntLatch | RxComplete | StatsFull)) {
- if ((status & 0xe000) != 0x2000) {
- netdev_dbg(dev, "interrupt from dead card\n");
- handled = 0;
- break;
- }
- if (status & RxComplete)
- el3_rx(dev);
- if (status & TxAvailable) {
- netdev_dbg(dev, " TX room bit was handled.\n");
- /* There's room in the FIFO for a full-sized packet. */
- outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
- netif_wake_queue(dev);
- }
- if (status & TxComplete)
- pop_tx_status(dev);
- if (status & (AdapterFailure | RxEarly | StatsFull)) {
- /* Handle all uncommon interrupts. */
- if (status & StatsFull) /* Empty statistics. */
- update_stats(dev);
- if (status & RxEarly) { /* Rx early is unused. */
- el3_rx(dev);
- outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
- }
- if (status & AdapterFailure) {
- u16 fifo_diag;
- EL3WINDOW(4);
- fifo_diag = inw(ioaddr + 4);
- EL3WINDOW(1);
- netdev_warn(dev, "adapter failure, FIFO diagnostic register %04x.\n",
- fifo_diag);
- if (fifo_diag & 0x0400) {
- /* Tx overrun */
- tc589_wait_for_completion(dev, TxReset);
- outw(TxEnable, ioaddr + EL3_CMD);
- }
- if (fifo_diag & 0x2000) {
- /* Rx underrun */
- tc589_wait_for_completion(dev, RxReset);
- set_rx_mode(dev);
- outw(RxEnable, ioaddr + EL3_CMD);
- }
- outw(AckIntr | AdapterFailure, ioaddr + EL3_CMD);
- }
- }
- if (++i > 10) {
- netdev_err(dev, "infinite loop in interrupt, status %4.4x.\n",
- status);
- /* Clear all interrupts */
- outw(AckIntr | 0xFF, ioaddr + EL3_CMD);
- break;
- }
- /* Acknowledge the IRQ. */
- outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
- }
- lp->last_irq = jiffies;
- spin_unlock(&lp->lock);
- netdev_dbg(dev, "exiting interrupt, status %4.4x.\n",
- inw(ioaddr + EL3_STATUS));
- return IRQ_RETVAL(handled);
-}
-
-static void media_check(unsigned long arg)
-{
- struct net_device *dev = (struct net_device *)(arg);
- struct el3_private *lp = netdev_priv(dev);
- unsigned int ioaddr = dev->base_addr;
- u16 media, errs;
- unsigned long flags;
-
- if (!netif_device_present(dev)) goto reschedule;
-
- /* Check for pending interrupt with expired latency timer: with
- this, we can limp along even if the interrupt is blocked */
- if ((inw(ioaddr + EL3_STATUS) & IntLatch) &&
- (inb(ioaddr + EL3_TIMER) == 0xff)) {
- if (!lp->fast_poll)
- netdev_warn(dev, "interrupt(s) dropped!\n");
-
- local_irq_save(flags);
- el3_interrupt(dev->irq, dev);
- local_irq_restore(flags);
-
- lp->fast_poll = HZ;
- }
- if (lp->fast_poll) {
- lp->fast_poll--;
- lp->media.expires = jiffies + HZ/100;
- add_timer(&lp->media);
- return;
- }
-
- /* lp->lock guards the EL3 window. Window should always be 1 except
- when the lock is held */
- spin_lock_irqsave(&lp->lock, flags);
- EL3WINDOW(4);
- media = inw(ioaddr+WN4_MEDIA) & 0xc810;
-
- /* Ignore collisions unless we've had no irq's recently */
- if (time_before(jiffies, lp->last_irq + HZ)) {
- media &= ~0x0010;
- } else {
- /* Try harder to detect carrier errors */
- EL3WINDOW(6);
- outw(StatsDisable, ioaddr + EL3_CMD);
- errs = inb(ioaddr + 0);
- outw(StatsEnable, ioaddr + EL3_CMD);
- dev->stats.tx_carrier_errors += errs;
- if (errs || (lp->media_status & 0x0010)) media |= 0x0010;
- }
-
- if (media != lp->media_status) {
- if ((media & lp->media_status & 0x8000) &&
- ((lp->media_status ^ media) & 0x0800))
- netdev_info(dev, "%s link beat\n",
- (lp->media_status & 0x0800 ? "lost" : "found"));
- else if ((media & lp->media_status & 0x4000) &&
- ((lp->media_status ^ media) & 0x0010))
- netdev_info(dev, "coax cable %s\n",
- (lp->media_status & 0x0010 ? "ok" : "problem"));
- if (dev->if_port == 0) {
- if (media & 0x8000) {
- if (media & 0x0800)
- netdev_info(dev, "flipped to 10baseT\n");
- else
- tc589_set_xcvr(dev, 2);
- } else if (media & 0x4000) {
- if (media & 0x0010)
- tc589_set_xcvr(dev, 1);
- else
- netdev_info(dev, "flipped to 10base2\n");
- }
- }
- lp->media_status = media;
- }
-
- EL3WINDOW(1);
- spin_unlock_irqrestore(&lp->lock, flags);
-
-reschedule:
- lp->media.expires = jiffies + HZ;
- add_timer(&lp->media);
-}
-
-static struct net_device_stats *el3_get_stats(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- unsigned long flags;
- struct pcmcia_device *link = lp->p_dev;
-
- if (pcmcia_dev_present(link)) {
- spin_lock_irqsave(&lp->lock, flags);
- update_stats(dev);
- spin_unlock_irqrestore(&lp->lock, flags);
- }
- return &dev->stats;
-}
-
-/*
- Update statistics. We change to register window 6, so this should be run
- single-threaded if the device is active. This is expected to be a rare
- operation, and it's simpler for the rest of the driver to assume that
- window 1 is always valid rather than use a special window-state variable.
-
- Caller must hold the lock for this.
-*/
-static void update_stats(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
-
- netdev_dbg(dev, "updating the statistics.\n");
- /* Turn off statistics updates while reading. */
- outw(StatsDisable, ioaddr + EL3_CMD);
- /* Switch to the stats window, and read everything. */
- EL3WINDOW(6);
- dev->stats.tx_carrier_errors += inb(ioaddr + 0);
- dev->stats.tx_heartbeat_errors += inb(ioaddr + 1);
- /* Multiple collisions. */ inb(ioaddr + 2);
- dev->stats.collisions += inb(ioaddr + 3);
- dev->stats.tx_window_errors += inb(ioaddr + 4);
- dev->stats.rx_fifo_errors += inb(ioaddr + 5);
- dev->stats.tx_packets += inb(ioaddr + 6);
- /* Rx packets */ inb(ioaddr + 7);
- /* Tx deferrals */ inb(ioaddr + 8);
- /* Rx octets */ inw(ioaddr + 10);
- /* Tx octets */ inw(ioaddr + 12);
-
- /* Back to window 1, and turn statistics back on. */
- EL3WINDOW(1);
- outw(StatsEnable, ioaddr + EL3_CMD);
-}
-
-static int el3_rx(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- int worklimit = 32;
- short rx_status;
-
- netdev_dbg(dev, "in rx_packet(), status %4.4x, rx_status %4.4x.\n",
- inw(ioaddr+EL3_STATUS), inw(ioaddr+RX_STATUS));
- while (!((rx_status = inw(ioaddr + RX_STATUS)) & 0x8000) &&
- worklimit > 0) {
- worklimit--;
- if (rx_status & 0x4000) { /* Error, update stats. */
- short error = rx_status & 0x3800;
- dev->stats.rx_errors++;
- switch (error) {
- case 0x0000: dev->stats.rx_over_errors++; break;
- case 0x0800: dev->stats.rx_length_errors++; break;
- case 0x1000: dev->stats.rx_frame_errors++; break;
- case 0x1800: dev->stats.rx_length_errors++; break;
- case 0x2000: dev->stats.rx_frame_errors++; break;
- case 0x2800: dev->stats.rx_crc_errors++; break;
- }
- } else {
- short pkt_len = rx_status & 0x7ff;
- struct sk_buff *skb;
-
- skb = dev_alloc_skb(pkt_len+5);
-
- netdev_dbg(dev, " Receiving packet size %d status %4.4x.\n",
- pkt_len, rx_status);
- if (skb != NULL) {
- skb_reserve(skb, 2);
- insl(ioaddr+RX_FIFO, skb_put(skb, pkt_len),
- (pkt_len+3)>>2);
- skb->protocol = eth_type_trans(skb, dev);
- netif_rx(skb);
- dev->stats.rx_packets++;
- dev->stats.rx_bytes += pkt_len;
- } else {
- netdev_dbg(dev, "couldn't allocate a sk_buff of size %d.\n",
- pkt_len);
- dev->stats.rx_dropped++;
- }
- }
- /* Pop the top of the Rx FIFO */
- tc589_wait_for_completion(dev, RxDiscard);
- }
- if (worklimit == 0)
- netdev_warn(dev, "too much work in el3_rx!\n");
- return 0;
-}
-
-static void set_rx_mode(struct net_device *dev)
-{
- unsigned int ioaddr = dev->base_addr;
- u16 opts = SetRxFilter | RxStation | RxBroadcast;
-
- if (dev->flags & IFF_PROMISC)
- opts |= RxMulticast | RxProm;
- else if (!netdev_mc_empty(dev) || (dev->flags & IFF_ALLMULTI))
- opts |= RxMulticast;
- outw(opts, ioaddr + EL3_CMD);
-}
-
-static void set_multicast_list(struct net_device *dev)
-{
- struct el3_private *priv = netdev_priv(dev);
- unsigned long flags;
-
- spin_lock_irqsave(&priv->lock, flags);
- set_rx_mode(dev);
- spin_unlock_irqrestore(&priv->lock, flags);
-}
-
-static int el3_close(struct net_device *dev)
-{
- struct el3_private *lp = netdev_priv(dev);
- struct pcmcia_device *link = lp->p_dev;
- unsigned int ioaddr = dev->base_addr;
-
- dev_dbg(&link->dev, "%s: shutting down ethercard.\n", dev->name);
-
- if (pcmcia_dev_present(link)) {
- /* Turn off statistics ASAP. We update dev->stats below. */
- outw(StatsDisable, ioaddr + EL3_CMD);
-
- /* Disable the receiver and transmitter. */
- outw(RxDisable, ioaddr + EL3_CMD);
- outw(TxDisable, ioaddr + EL3_CMD);
-
- if (dev->if_port == 2)
- /* Turn off thinnet power. Green! */
- outw(StopCoax, ioaddr + EL3_CMD);
- else if (dev->if_port == 1) {
- /* Disable link beat and jabber */
- EL3WINDOW(4);
- outw(0, ioaddr + WN4_MEDIA);
- }
-
- /* Switching back to window 0 disables the IRQ. */
- EL3WINDOW(0);
- /* But we explicitly zero the IRQ line select anyway. */
- outw(0x0f00, ioaddr + WN0_IRQ);
-
- /* Check if the card still exists */
- if ((inw(ioaddr+EL3_STATUS) & 0xe000) == 0x2000)
- update_stats(dev);
- }
-
- link->open--;
- netif_stop_queue(dev);
- del_timer_sync(&lp->media);
-
- return 0;
-}
-
-static const struct pcmcia_device_id tc589_ids[] = {
- PCMCIA_MFC_DEVICE_MANF_CARD(0, 0x0101, 0x0562),
- PCMCIA_MFC_DEVICE_PROD_ID1(0, "Motorola MARQUIS", 0xf03e4e77),
- PCMCIA_DEVICE_MANF_CARD(0x0101, 0x0589),
- PCMCIA_DEVICE_PROD_ID12("Farallon", "ENet", 0x58d93fc4, 0x992c2202),
- PCMCIA_MFC_DEVICE_CIS_MANF_CARD(0, 0x0101, 0x0035, "cis/3CXEM556.cis"),
- PCMCIA_MFC_DEVICE_CIS_MANF_CARD(0, 0x0101, 0x003d, "cis/3CXEM556.cis"),
- PCMCIA_DEVICE_NULL,
-};
-MODULE_DEVICE_TABLE(pcmcia, tc589_ids);
-
-static struct pcmcia_driver tc589_driver = {
- .owner = THIS_MODULE,
- .name = "3c589_cs",
- .probe = tc589_probe,
- .remove = tc589_detach,
- .id_table = tc589_ids,
- .suspend = tc589_suspend,
- .resume = tc589_resume,
-};
-
-static int __init init_tc589(void)
-{
- return pcmcia_register_driver(&tc589_driver);
-}
-
-static void __exit exit_tc589(void)
-{
- pcmcia_unregister_driver(&tc589_driver);
-}
-
-module_init(init_tc589);
-module_exit(exit_tc589);
if NET_PCMCIA && PCMCIA
-config PCMCIA_3C589
- tristate "3Com 3c589 PCMCIA support"
- help
- Say Y here if you intend to attach a 3Com 3c589 or compatible PCMCIA
- (PC-card) Ethernet card to your computer.
-
- To compile this driver as a module, choose M here: the module will be
- called 3c589_cs. If unsure, say N.
-
-config PCMCIA_3C574
- tristate "3Com 3c574 PCMCIA support"
- help
- Say Y here if you intend to attach a 3Com 3c574 or compatible PCMCIA
- (PC-card) Fast Ethernet card to your computer.
-
- To compile this driver as a module, choose M here: the module will be
- called 3c574_cs. If unsure, say N.
-
config PCMCIA_FMVJ18X
tristate "Fujitsu FMV-J18x PCMCIA support"
select CRC32
#
# 16-bit client drivers
-obj-$(CONFIG_PCMCIA_3C589) += 3c589_cs.o
-obj-$(CONFIG_PCMCIA_3C574) += 3c574_cs.o
obj-$(CONFIG_PCMCIA_FMVJ18X) += fmvj18x_cs.o
obj-$(CONFIG_PCMCIA_NMCLAN) += nmclan_cs.o
obj-$(CONFIG_PCMCIA_PCNET) += pcnet_cs.o
+++ /dev/null
-/* typhoon.c: A Linux Ethernet device driver for 3Com 3CR990 family of NICs */
-/*
- Written 2002-2004 by David Dillow <dave@thedillows.org>
- Based on code written 1998-2000 by Donald Becker <becker@scyld.com> and
- Linux 2.2.x driver by David P. McLean <davidpmclean@yahoo.com>.
-
- This software may be used and distributed according to the terms of
- the GNU General Public License (GPL), incorporated herein by reference.
- Drivers based on or derived from this code fall under the GPL and must
- retain the authorship, copyright and license notice. This file is not
- a complete program and may only be used when the entire operating
- system is licensed under the GPL.
-
- This software is available on a public web site. It may enable
- cryptographic capabilities of the 3Com hardware, and may be
- exported from the United States under License Exception "TSU"
- pursuant to 15 C.F.R. Section 740.13(e).
-
- This work was funded by the National Library of Medicine under
- the Department of Energy project number 0274DD06D1 and NLM project
- number Y1-LM-2015-01.
-
- This driver is designed for the 3Com 3CR990 Family of cards with the
- 3XP Processor. It has been tested on x86 and sparc64.
-
- KNOWN ISSUES:
- *) Cannot DMA Rx packets to a 2 byte aligned address. This is also a
- firmware issue. Hopefully 3Com will fix it.
- *) Waiting for a command response takes 8ms due to non-preemptible
- polling. Only significant for getting stats and creating
- SAs, but an ugly wart nevertheless.
-
- TODO:
- *) Doesn't do IPSEC offloading. Yet. Keep yer pants on, it's coming.
- *) Add more support for ethtool (especially for NIC stats)
- *) Allow disabling of RX checksum offloading
- *) Fix MAC changing to work while the interface is up
- (Need to put commands on the TX ring, which changes
- the locking)
- *) Add in FCS to {rx,tx}_bytes, since the hardware doesn't. See
- http://oss.sgi.com/cgi-bin/mesg.cgi?a=netdev&i=20031215152211.7003fe8e.rddunlap%40osdl.org
-*/
-
-/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
- * Setting to > 1518 effectively disables this feature.
- */
-static int rx_copybreak = 200;
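/* Editor's sketch: the receive path that consumes rx_copybreak is not part of
 * this hunk. Below is a minimal, hypothetical illustration of the usual
 * copy-only-tiny-frames pattern, not the driver's actual typhoon_rx(); the
 * function name and the ring_skb parameter are placeholders.
 */
static struct sk_buff *rx_copybreak_sketch(struct net_device *dev,
					   struct sk_buff *ring_skb,
					   int pkt_len)
{
	struct sk_buff *skb;

	if (pkt_len < rx_copybreak &&
	    (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
		/* Small frame: copy into a fresh skb so the large,
		 * DMA-mapped ring buffer can be handed straight back
		 * to the NIC.
		 */
		skb_reserve(skb, 2);	/* align the IP header */
		skb_copy_to_linear_data(skb, ring_skb->data, pkt_len);
		skb_put(skb, pkt_len);
	} else {
		/* Large frame: pass the ring buffer itself up the stack;
		 * a replacement buffer must then be posted to the NIC
		 * (not shown).
		 */
		skb = ring_skb;
		skb_put(skb, pkt_len);
	}
	skb->protocol = eth_type_trans(skb, dev);
	netif_rx(skb);
	return skb;
}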
-
-/* Should we use MMIO or Port IO?
- * 0: Port IO
- * 1: MMIO
- * 2: Try MMIO, fallback to Port IO
- */
-static unsigned int use_mmio = 2;
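/* Editor's sketch: the probe code that honours use_mmio is not in this hunk.
 * This is a minimal illustration of the usual "try MMIO, fall back to port
 * I/O" pattern, shown here purely for reference; the BAR numbers and mapping
 * length are placeholders, not the values the driver actually uses.
 */
static void __iomem *map_regs_sketch(struct pci_dev *pdev)
{
	void __iomem *ioaddr = NULL;

	if (use_mmio)				/* 1 or 2: try MMIO first */
		ioaddr = pci_iomap(pdev, 1, 128);
	if (!ioaddr && use_mmio != 1) {		/* 0 or 2: use/fall back to PIO */
		use_mmio = 0;
		ioaddr = pci_iomap(pdev, 0, 128);
	}
	return ioaddr;
}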
-
-/* end user-configurable values */
-
-/* Maximum number of multicast addresses to filter (vs. rx-all-multicast).
- */
-static const int multicast_filter_limit = 32;
-
-/* Operational parameters that are set at compile time. */
-
-/* Keep the ring sizes a power of two for compile efficiency.
- * The compiler will convert <unsigned>'%'<2^N> into a bit mask.
- * Making the Tx ring too large decreases the effectiveness of channel
- * bonding and packet priority.
- * There are no ill effects from too-large receive rings.
- *
- * We don't currently use the Hi Tx ring, so don't make it very big.
- *
- * Beware that if we start using the Hi Tx ring, we will need to change
- * typhoon_num_free_tx() and typhoon_tx_complete() to account for that.
- */
-#define TXHI_ENTRIES 2
-#define TXLO_ENTRIES 128
-#define RX_ENTRIES 32
-#define COMMAND_ENTRIES 16
-#define RESPONSE_ENTRIES 32
-
-#define COMMAND_RING_SIZE (COMMAND_ENTRIES * sizeof(struct cmd_desc))
-#define RESPONSE_RING_SIZE (RESPONSE_ENTRIES * sizeof(struct resp_desc))
-
-/* The 3XP will preload and remove 64 entries from the free buffer
- * list, and we need one entry to keep the ring from wrapping, so
- * to keep this a power of two, we use 128 entries.
- */
-#define RXFREE_ENTRIES 128
-#define RXENT_ENTRIES (RXFREE_ENTRIES - 1)
-
-/* Operational parameters that usually are not changed. */
-
-/* Time in jiffies before concluding the transmitter is hung. */
-#define TX_TIMEOUT (2*HZ)
-
-#define PKT_BUF_SZ 1536
-#define FIRMWARE_NAME "3com/typhoon.bin"
-
-#define pr_fmt(fmt) KBUILD_MODNAME " " fmt
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/string.h>
-#include <linux/timer.h>
-#include <linux/errno.h>
-#include <linux/ioport.h>
-#include <linux/interrupt.h>
-#include <linux/pci.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/mm.h>
-#include <linux/init.h>
-#include <linux/delay.h>
-#include <linux/ethtool.h>
-#include <linux/if_vlan.h>
-#include <linux/crc32.h>
-#include <linux/bitops.h>
-#include <asm/processor.h>
-#include <asm/io.h>
-#include <asm/uaccess.h>
-#include <linux/in6.h>
-#include <linux/dma-mapping.h>
-#include <linux/firmware.h>
-
-#include "typhoon.h"
-
-MODULE_AUTHOR("David Dillow <dave@thedillows.org>");
-MODULE_VERSION("1.0");
-MODULE_LICENSE("GPL");
-MODULE_FIRMWARE(FIRMWARE_NAME);
-MODULE_DESCRIPTION("3Com Typhoon Family (3C990, 3CR990, and variants)");
-MODULE_PARM_DESC(rx_copybreak, "Packets smaller than this are copied and "
- "the buffer given back to the NIC. Default "
- "is 200.");
-MODULE_PARM_DESC(use_mmio, "Use MMIO (1) or PIO(0) to access the NIC. "
- "Default is to try MMIO and fallback to PIO.");
-module_param(rx_copybreak, int, 0);
-module_param(use_mmio, int, 0);
-
-#if defined(NETIF_F_TSO) && MAX_SKB_FRAGS > 32
-#warning Typhoon only supports 32 entries in its SG list for TSO, disabling TSO
-#undef NETIF_F_TSO
-#endif
-
-#if TXLO_ENTRIES <= (2 * MAX_SKB_FRAGS)
-#error TX ring too small!
-#endif
-
-struct typhoon_card_info {
- const char *name;
- const int capabilities;
-};
-
-#define TYPHOON_CRYPTO_NONE 0x00
-#define TYPHOON_CRYPTO_DES 0x01
-#define TYPHOON_CRYPTO_3DES 0x02
-#define TYPHOON_CRYPTO_VARIABLE 0x04
-#define TYPHOON_FIBER 0x08
-#define TYPHOON_WAKEUP_NEEDS_RESET 0x10
-
-enum typhoon_cards {
- TYPHOON_TX = 0, TYPHOON_TX95, TYPHOON_TX97, TYPHOON_SVR,
- TYPHOON_SVR95, TYPHOON_SVR97, TYPHOON_TXM, TYPHOON_BSVR,
- TYPHOON_FX95, TYPHOON_FX97, TYPHOON_FX95SVR, TYPHOON_FX97SVR,
- TYPHOON_FXM,
-};
-
-/* directly indexed by enum typhoon_cards, above */
-static struct typhoon_card_info typhoon_card_info[] __devinitdata = {
- { "3Com Typhoon (3C990-TX)",
- TYPHOON_CRYPTO_NONE},
- { "3Com Typhoon (3CR990-TX-95)",
- TYPHOON_CRYPTO_DES},
- { "3Com Typhoon (3CR990-TX-97)",
- TYPHOON_CRYPTO_DES | TYPHOON_CRYPTO_3DES},
- { "3Com Typhoon (3C990SVR)",
- TYPHOON_CRYPTO_NONE},
- { "3Com Typhoon (3CR990SVR95)",
- TYPHOON_CRYPTO_DES},
- { "3Com Typhoon (3CR990SVR97)",
- TYPHOON_CRYPTO_DES | TYPHOON_CRYPTO_3DES},
- { "3Com Typhoon2 (3C990B-TX-M)",
- TYPHOON_CRYPTO_VARIABLE},
- { "3Com Typhoon2 (3C990BSVR)",
- TYPHOON_CRYPTO_VARIABLE},
- { "3Com Typhoon (3CR990-FX-95)",
- TYPHOON_CRYPTO_DES | TYPHOON_FIBER},
- { "3Com Typhoon (3CR990-FX-97)",
- TYPHOON_CRYPTO_DES | TYPHOON_CRYPTO_3DES | TYPHOON_FIBER},
- { "3Com Typhoon (3CR990-FX-95 Server)",
- TYPHOON_CRYPTO_DES | TYPHOON_FIBER},
- { "3Com Typhoon (3CR990-FX-97 Server)",
- TYPHOON_CRYPTO_DES | TYPHOON_CRYPTO_3DES | TYPHOON_FIBER},
- { "3Com Typhoon2 (3C990B-FX-97)",
- TYPHOON_CRYPTO_VARIABLE | TYPHOON_FIBER},
-};
-
-/* Notes on the new subsystem numbering scheme:
- * bits 0-1 indicate crypto capabilities: (0) variable, (1) DES, or (2) 3DES
- * bit 4 indicates if this card has secured firmware (we don't support it)
- * bit 8 indicates if this is a (0) copper or (1) fiber card
- * bits 12-16 indicate card type: (0) client and (1) server
- */
-static DEFINE_PCI_DEVICE_TABLE(typhoon_pci_tbl) = {
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0,TYPHOON_TX },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_TX_95,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_TX95 },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_TX_97,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_TX97 },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990B,
- PCI_ANY_ID, 0x1000, 0, 0, TYPHOON_TXM },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990B,
- PCI_ANY_ID, 0x1102, 0, 0, TYPHOON_FXM },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990B,
- PCI_ANY_ID, 0x2000, 0, 0, TYPHOON_BSVR },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_FX,
- PCI_ANY_ID, 0x1101, 0, 0, TYPHOON_FX95 },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_FX,
- PCI_ANY_ID, 0x1102, 0, 0, TYPHOON_FX97 },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_FX,
- PCI_ANY_ID, 0x2101, 0, 0, TYPHOON_FX95SVR },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990_FX,
- PCI_ANY_ID, 0x2102, 0, 0, TYPHOON_FX97SVR },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990SVR95,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_SVR95 },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990SVR97,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_SVR97 },
- { PCI_VENDOR_ID_3COM, PCI_DEVICE_ID_3COM_3CR990SVR,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPHOON_SVR },
- { 0, }
-};
-MODULE_DEVICE_TABLE(pci, typhoon_pci_tbl);
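/* Editor's sketch: a hypothetical helper that decodes the subsystem-ID fields
 * described in the note above; it is not part of the driver, which picks a
 * typhoon_card_info entry via the PCI table instead. The card-type field is
 * printed raw because the table entries above use values 1 and 2 there rather
 * than the plain 0/1 the note mentions.
 */
static void typhoon_decode_subsystem_sketch(u16 subsys)
{
	static const char * const crypto[] = { "variable", "DES", "3DES", "?" };

	pr_info("crypto=%s, secured-fw=%d, media=%s, card-type-field=%d\n",
		crypto[subsys & 0x3],
		(subsys >> 4) & 1,
		(subsys & (1 << 8)) ? "fiber" : "copper",
		(subsys >> 12) & 0xf);
}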
-
-/* Define the shared memory area
- * Align everything the 3XP will normally be using.
- * We'll need to move/align txHi if we start using that ring.
- */
-#define __3xp_aligned ____cacheline_aligned
-struct typhoon_shared {
- struct typhoon_interface iface;
- struct typhoon_indexes indexes __3xp_aligned;
- struct tx_desc txLo[TXLO_ENTRIES] __3xp_aligned;
- struct rx_desc rxLo[RX_ENTRIES] __3xp_aligned;
- struct rx_desc rxHi[RX_ENTRIES] __3xp_aligned;
- struct cmd_desc cmd[COMMAND_ENTRIES] __3xp_aligned;
- struct resp_desc resp[RESPONSE_ENTRIES] __3xp_aligned;
- struct rx_free rxBuff[RXFREE_ENTRIES] __3xp_aligned;
- u32 zeroWord;
- struct tx_desc txHi[TXHI_ENTRIES];
-} __packed;
-
-struct rxbuff_ent {
- struct sk_buff *skb;
- dma_addr_t dma_addr;
-};
-
-struct typhoon {
- /* Tx cache line section */
- struct transmit_ring txLoRing ____cacheline_aligned;
- struct pci_dev * tx_pdev;
- void __iomem *tx_ioaddr;
- u32 txlo_dma_addr;
-
- /* Irq/Rx cache line section */
- void __iomem *ioaddr ____cacheline_aligned;
- struct typhoon_indexes *indexes;
- u8 awaiting_resp;
- u8 duplex;
- u8 speed;
- u8 card_state;
- struct basic_ring rxLoRing;
- struct pci_dev * pdev;
- struct net_device * dev;
- struct napi_struct napi;
- struct basic_ring rxHiRing;
- struct basic_ring rxBuffRing;
- struct rxbuff_ent rxbuffers[RXENT_ENTRIES];
-
- /* general section */
- spinlock_t command_lock ____cacheline_aligned;
- struct basic_ring cmdRing;
- struct basic_ring respRing;
- struct net_device_stats stats;
- struct net_device_stats stats_saved;
- struct typhoon_shared * shared;
- dma_addr_t shared_dma;
- __le16 xcvr_select;
- __le16 wol_events;
- __le32 offload;
-
- /* unused stuff (future use) */
- int capabilities;
- struct transmit_ring txHiRing;
-};
-
-enum completion_wait_values {
- NoWait = 0, WaitNoSleep, WaitSleep,
-};
-
-/* These are the values for the typhoon.card_state variable.
- * These determine where the statistics will come from in get_stats().
- * The sleep image does not support the statistics we need.
- */
-enum state_values {
- Sleeping = 0, Running,
-};
-
-/* PCI writes are not guaranteed to be posted in order, but outstanding writes
- * cannot pass a read, so this forces current writes to post.
- */
-#define typhoon_post_pci_writes(x) \
- do { if(likely(use_mmio)) ioread32(x+TYPHOON_REG_HEARTBEAT); } while(0)
-
-/* We'll wait up to six seconds for a reset, and half a second normally.
- */
-#define TYPHOON_UDELAY 50
-#define TYPHOON_RESET_TIMEOUT_SLEEP (6 * HZ)
-#define TYPHOON_RESET_TIMEOUT_NOSLEEP ((6 * 1000000) / TYPHOON_UDELAY)
-#define TYPHOON_WAIT_TIMEOUT ((1000000 / 2) / TYPHOON_UDELAY)
-
-#if defined(NETIF_F_TSO)
-#define skb_tso_size(x) (skb_shinfo(x)->gso_size)
-#define TSO_NUM_DESCRIPTORS 2
-#define TSO_OFFLOAD_ON TYPHOON_OFFLOAD_TCP_SEGMENT
-#else
-#define NETIF_F_TSO 0
-#define skb_tso_size(x) 0
-#define TSO_NUM_DESCRIPTORS 0
-#define TSO_OFFLOAD_ON 0
-#endif
-
-static inline void
-typhoon_inc_index(u32 *index, const int count, const int num_entries)
-{
- /* Increment a ring index -- we can use this for all rings except
- * the Rx rings, as they use different sized descriptors;
- * otherwise, everything is the same size as a cmd_desc.
- */
- *index += count * sizeof(struct cmd_desc);
- *index %= num_entries * sizeof(struct cmd_desc);
-}
-
-static inline void
-typhoon_inc_cmd_index(u32 *index, const int count)
-{
- typhoon_inc_index(index, count, COMMAND_ENTRIES);
-}
-
-static inline void
-typhoon_inc_resp_index(u32 *index, const int count)
-{
- typhoon_inc_index(index, count, RESPONSE_ENTRIES);
-}
-
-static inline void
-typhoon_inc_rxfree_index(u32 *index, const int count)
-{
- typhoon_inc_index(index, count, RXFREE_ENTRIES);
-}
-
-static inline void
-typhoon_inc_tx_index(u32 *index, const int count)
-{
- /* if we start using the Hi Tx ring, this needs updating */
- typhoon_inc_index(index, count, TXLO_ENTRIES);
-}
-
-static inline void
-typhoon_inc_rx_index(u32 *index, const int count)
-{
- /* sizeof(struct rx_desc) != sizeof(struct cmd_desc) */
- *index += count * sizeof(struct rx_desc);
- *index %= RX_ENTRIES * sizeof(struct rx_desc);
-}
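/* Editor's sketch: an equivalent, purely illustrative form of the command-ring
 * increment above. Per the earlier comment on ring sizes, the ring byte size
 * is expected to be a power of two, so the modulo reduces to a mask; the
 * driver itself simply relies on the compiler to make that conversion. The
 * BUILD_BUG_ON guards the power-of-two assumption.
 */
static inline void
typhoon_inc_cmd_index_masked(u32 *index, const int count)
{
	BUILD_BUG_ON(COMMAND_RING_SIZE & (COMMAND_RING_SIZE - 1));
	*index = (*index + count * sizeof(struct cmd_desc)) &
		 (COMMAND_RING_SIZE - 1);
}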
-
-static int
-typhoon_reset(void __iomem *ioaddr, int wait_type)
-{
- int i, err = 0;
- int timeout;
-
- if(wait_type == WaitNoSleep)
- timeout = TYPHOON_RESET_TIMEOUT_NOSLEEP;
- else
- timeout = TYPHOON_RESET_TIMEOUT_SLEEP;
-
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);
-
- iowrite32(TYPHOON_RESET_ALL, ioaddr + TYPHOON_REG_SOFT_RESET);
- typhoon_post_pci_writes(ioaddr);
- udelay(1);
- iowrite32(TYPHOON_RESET_NONE, ioaddr + TYPHOON_REG_SOFT_RESET);
-
- if(wait_type != NoWait) {
- for(i = 0; i < timeout; i++) {
- if(ioread32(ioaddr + TYPHOON_REG_STATUS) ==
- TYPHOON_STATUS_WAITING_FOR_HOST)
- goto out;
-
- if(wait_type == WaitSleep)
- schedule_timeout_uninterruptible(1);
- else
- udelay(TYPHOON_UDELAY);
- }
-
- err = -ETIMEDOUT;
- }
-
-out:
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);
-
- /* The 3XP seems to need a little extra time to complete the load
- * of the sleep image before we can reliably boot it. Failure to
- * do this occasionally results in a hung adapter after boot in
- * typhoon_init_one() while trying to read the MAC address or
- * putting the card to sleep. 3Com's driver waits 5ms, but
- * that seems to be overkill. However, if we can sleep, we might
- * as well give it that much time. Otherwise, we'll give it 500us,
- * which should be enough (I've seen it work well at 100us, but still
- * saw occasional problems.)
- */
- if(wait_type == WaitSleep)
- msleep(5);
- else
- udelay(500);
- return err;
-}
-
-static int
-typhoon_wait_status(void __iomem *ioaddr, u32 wait_value)
-{
- int i, err = 0;
-
- for(i = 0; i < TYPHOON_WAIT_TIMEOUT; i++) {
- if(ioread32(ioaddr + TYPHOON_REG_STATUS) == wait_value)
- goto out;
- udelay(TYPHOON_UDELAY);
- }
-
- err = -ETIMEDOUT;
-
-out:
- return err;
-}
-
-static inline void
-typhoon_media_status(struct net_device *dev, struct resp_desc *resp)
-{
- if(resp->parm1 & TYPHOON_MEDIA_STAT_NO_LINK)
- netif_carrier_off(dev);
- else
- netif_carrier_on(dev);
-}
-
-static inline void
-typhoon_hello(struct typhoon *tp)
-{
- struct basic_ring *ring = &tp->cmdRing;
- struct cmd_desc *cmd;
-
- /* We only get a hello request if we've not sent anything to the
- * card in a long while. If the lock is held, then we're in the
- * process of issuing a command, so we don't need to respond.
- */
- if(spin_trylock(&tp->command_lock)) {
- cmd = (struct cmd_desc *)(ring->ringBase + ring->lastWrite);
- typhoon_inc_cmd_index(&ring->lastWrite, 1);
-
- INIT_COMMAND_NO_RESPONSE(cmd, TYPHOON_CMD_HELLO_RESP);
- wmb();
- iowrite32(ring->lastWrite, tp->ioaddr + TYPHOON_REG_CMD_READY);
- spin_unlock(&tp->command_lock);
- }
-}
-
-static int
-typhoon_process_response(struct typhoon *tp, int resp_size,
- struct resp_desc *resp_save)
-{
- struct typhoon_indexes *indexes = tp->indexes;
- struct resp_desc *resp;
- u8 *base = tp->respRing.ringBase;
- int count, len, wrap_len;
- u32 cleared;
- u32 ready;
-
- cleared = le32_to_cpu(indexes->respCleared);
- ready = le32_to_cpu(indexes->respReady);
- while(cleared != ready) {
- resp = (struct resp_desc *)(base + cleared);
- count = resp->numDesc + 1;
- if(resp_save && resp->seqNo) {
- if(count > resp_size) {
- resp_save->flags = TYPHOON_RESP_ERROR;
- goto cleanup;
- }
-
- wrap_len = 0;
- len = count * sizeof(*resp);
- if(unlikely(cleared + len > RESPONSE_RING_SIZE)) {
- wrap_len = cleared + len - RESPONSE_RING_SIZE;
- len = RESPONSE_RING_SIZE - cleared;
- }
-
- memcpy(resp_save, resp, len);
- if(unlikely(wrap_len)) {
- resp_save += len / sizeof(*resp);
- memcpy(resp_save, base, wrap_len);
- }
-
- resp_save = NULL;
- } else if(resp->cmd == TYPHOON_CMD_READ_MEDIA_STATUS) {
- typhoon_media_status(tp->dev, resp);
- } else if(resp->cmd == TYPHOON_CMD_HELLO_RESP) {
- typhoon_hello(tp);
- } else {
- netdev_err(tp->dev,
- "dumping unexpected response 0x%04x:%d:0x%02x:0x%04x:%08x:%08x\n",
- le16_to_cpu(resp->cmd),
- resp->numDesc, resp->flags,
- le16_to_cpu(resp->parm1),
- le32_to_cpu(resp->parm2),
- le32_to_cpu(resp->parm3));
- }
-
-cleanup:
- typhoon_inc_resp_index(&cleared, count);
- }
-
- indexes->respCleared = cpu_to_le32(cleared);
- wmb();
- return resp_save == NULL;
-}
-
-static inline int
-typhoon_num_free(int lastWrite, int lastRead, int ringSize)
-{
- /* this works for all descriptors but rx_desc, as they are a
- * different size than the cmd_desc -- everyone else is the same
- */
- lastWrite /= sizeof(struct cmd_desc);
- lastRead /= sizeof(struct cmd_desc);
- return (ringSize + lastRead - lastWrite - 1) % ringSize;
-}
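/* Editor's sketch: a standalone userspace sanity check (compiled separately)
 * of the free-slot arithmetic above, using made-up indices. One slot is
 * always kept unused so a completely full ring is never mistaken for an
 * empty one.
 */
#include <assert.h>

static int num_free_sketch(int lastWrite, int lastRead, int ringSize)
{
	return (ringSize + lastRead - lastWrite - 1) % ringSize;
}

int main(void)
{
	assert(num_free_sketch(7, 3, 16) == 11);	/* partially used ring */
	assert(num_free_sketch(3, 3, 16) == 15);	/* empty: one slot held back */
	assert(num_free_sketch(2, 3, 16) == 0);		/* full: write sits just behind read */
	return 0;
}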
-
-static inline int
-typhoon_num_free_cmd(struct typhoon *tp)
-{
- int lastWrite = tp->cmdRing.lastWrite;
- int cmdCleared = le32_to_cpu(tp->indexes->cmdCleared);
-
- return typhoon_num_free(lastWrite, cmdCleared, COMMAND_ENTRIES);
-}
-
-static inline int
-typhoon_num_free_resp(struct typhoon *tp)
-{
- int respReady = le32_to_cpu(tp->indexes->respReady);
- int respCleared = le32_to_cpu(tp->indexes->respCleared);
-
- return typhoon_num_free(respReady, respCleared, RESPONSE_ENTRIES);
-}
-
-static inline int
-typhoon_num_free_tx(struct transmit_ring *ring)
-{
- /* if we start using the Hi Tx ring, this needs updating */
- return typhoon_num_free(ring->lastWrite, ring->lastRead, TXLO_ENTRIES);
-}
-
-static int
-typhoon_issue_command(struct typhoon *tp, int num_cmd, struct cmd_desc *cmd,
- int num_resp, struct resp_desc *resp)
-{
- struct typhoon_indexes *indexes = tp->indexes;
- struct basic_ring *ring = &tp->cmdRing;
- struct resp_desc local_resp;
- int i, err = 0;
- int got_resp;
- int freeCmd, freeResp;
- int len, wrap_len;
-
- spin_lock(&tp->command_lock);
-
- freeCmd = typhoon_num_free_cmd(tp);
- freeResp = typhoon_num_free_resp(tp);
-
- if(freeCmd < num_cmd || freeResp < num_resp) {
- netdev_err(tp->dev, "no descs for cmd, had (needed) %d (%d) cmd, %d (%d) resp\n",
- freeCmd, num_cmd, freeResp, num_resp);
- err = -ENOMEM;
- goto out;
- }
-
- if(cmd->flags & TYPHOON_CMD_RESPOND) {
- /* If we're expecting a response, but the caller hasn't given
- * us a place to put it, we'll provide one.
- */
- tp->awaiting_resp = 1;
- if(resp == NULL) {
- resp = &local_resp;
- num_resp = 1;
- }
- }
-
- wrap_len = 0;
- len = num_cmd * sizeof(*cmd);
- if(unlikely(ring->lastWrite + len > COMMAND_RING_SIZE)) {
- wrap_len = ring->lastWrite + len - COMMAND_RING_SIZE;
- len = COMMAND_RING_SIZE - ring->lastWrite;
- }
-
- memcpy(ring->ringBase + ring->lastWrite, cmd, len);
- if(unlikely(wrap_len)) {
- struct cmd_desc *wrap_ptr = cmd;
- wrap_ptr += len / sizeof(*cmd);
- memcpy(ring->ringBase, wrap_ptr, wrap_len);
- }
-
- typhoon_inc_cmd_index(&ring->lastWrite, num_cmd);
-
- /* "I feel a presence... another warrior is on the mesa."
- */
- wmb();
- iowrite32(ring->lastWrite, tp->ioaddr + TYPHOON_REG_CMD_READY);
- typhoon_post_pci_writes(tp->ioaddr);
-
- if((cmd->flags & TYPHOON_CMD_RESPOND) == 0)
- goto out;
-
- /* Ugh. We'll be here about 8ms, twiddling our thumbs, unable to
- * preempt or do anything other than take interrupts. So, don't
- * wait for a response unless you have to.
- *
- * I've thought about trying to sleep here, but we're called
- * from many contexts that don't allow that. Also, given the way
- * 3Com has implemented irq coalescing, we would likely timeout --
- * this has been observed in real life!
- *
- * The big killer is we have to wait to get stats from the card,
- * though we could go to a periodic refresh of those if we don't
- * mind them getting somewhat stale. The rest of the waiting
- * commands occur during open/close/suspend/resume, so they aren't
- * time critical. Creating SAs in the future will also have to
- * wait here.
- */
- got_resp = 0;
- for(i = 0; i < TYPHOON_WAIT_TIMEOUT && !got_resp; i++) {
- if(indexes->respCleared != indexes->respReady)
- got_resp = typhoon_process_response(tp, num_resp,
- resp);
- udelay(TYPHOON_UDELAY);
- }
-
- if(!got_resp) {
- err = -ETIMEDOUT;
- goto out;
- }
-
- /* Collect the error response even if we don't care about the
- * rest of the response
- */
- if(resp->flags & TYPHOON_RESP_ERROR)
- err = -EIO;
-
-out:
- if(tp->awaiting_resp) {
- tp->awaiting_resp = 0;
- smp_wmb();
-
- /* Ugh. If a response was added to the ring between
- * the call to typhoon_process_response() and the clearing
- * of tp->awaiting_resp, we could have missed the interrupt
- * and it could hang in the ring an indeterminate amount of
- * time. So, check for it, and interrupt ourselves if this
- * is the case.
- */
- if(indexes->respCleared != indexes->respReady)
- iowrite32(1, tp->ioaddr + TYPHOON_REG_SELF_INTERRUPT);
- }
-
- spin_unlock(&tp->command_lock);
- return err;
-}
-
-static inline void
-typhoon_tso_fill(struct sk_buff *skb, struct transmit_ring *txRing,
- u32 ring_dma)
-{
- struct tcpopt_desc *tcpd;
- u32 tcpd_offset = ring_dma;
-
- tcpd = (struct tcpopt_desc *) (txRing->ringBase + txRing->lastWrite);
- tcpd_offset += txRing->lastWrite;
- tcpd_offset += offsetof(struct tcpopt_desc, bytesTx);
- typhoon_inc_tx_index(&txRing->lastWrite, 1);
-
- tcpd->flags = TYPHOON_OPT_DESC | TYPHOON_OPT_TCP_SEG;
- tcpd->numDesc = 1;
- tcpd->mss_flags = cpu_to_le16(skb_tso_size(skb));
- tcpd->mss_flags |= TYPHOON_TSO_FIRST | TYPHOON_TSO_LAST;
- tcpd->respAddrLo = cpu_to_le32(tcpd_offset);
- tcpd->bytesTx = cpu_to_le32(skb->len);
- tcpd->status = 0;
-}
-
-static netdev_tx_t
-typhoon_start_tx(struct sk_buff *skb, struct net_device *dev)
-{
- struct typhoon *tp = netdev_priv(dev);
- struct transmit_ring *txRing;
- struct tx_desc *txd, *first_txd;
- dma_addr_t skb_dma;
- int numDesc;
-
- /* we have two rings to choose from, but we only use txLo for now
- * If we start using the Hi ring as well, we'll need to update
- * typhoon_stop_runtime(), typhoon_interrupt(), typhoon_num_free_tx(),
- * and TXHI_ENTRIES to match, as well as update the TSO code below
- * to get the right DMA address
- */
- txRing = &tp->txLoRing;
-
- /* We need one descriptor for each fragment of the sk_buff, plus
- * one for its ->data area.
- *
- * The docs say a maximum of 16 fragment descriptors per TCP option
- * descriptor, then make a new packet descriptor and option descriptor
- * for the next 16 fragments. The engineers say just an option
- * descriptor is needed. I've tested up to 26 fragments with a single
- * packet descriptor/option descriptor combo, so I use that for now.
- *
- * If problems develop with TSO, check this first.
- */
- numDesc = skb_shinfo(skb)->nr_frags + 1;
- if (skb_is_gso(skb))
- numDesc++;
-
- /* When checking for free space in the ring, we need to also
- * account for the initial Tx descriptor, and we always must leave
- * at least one descriptor unused in the ring so that it doesn't
- * wrap and look empty.
- *
- * The only time we should loop here is when we hit the race
- * between marking the queue awake and updating the cleared index.
- * Just loop and it will appear. This comes from the acenic driver.
- */
- while(unlikely(typhoon_num_free_tx(txRing) < (numDesc + 2)))
- smp_rmb();
-
- first_txd = (struct tx_desc *) (txRing->ringBase + txRing->lastWrite);
- typhoon_inc_tx_index(&txRing->lastWrite, 1);
-
- first_txd->flags = TYPHOON_TX_DESC | TYPHOON_DESC_VALID;
- first_txd->numDesc = 0;
- first_txd->len = 0;
- first_txd->tx_addr = (u64)((unsigned long) skb);
- first_txd->processFlags = 0;
-
- if(skb->ip_summed == CHECKSUM_PARTIAL) {
- /* The 3XP will figure out if this is UDP/TCP */
- first_txd->processFlags |= TYPHOON_TX_PF_TCP_CHKSUM;
- first_txd->processFlags |= TYPHOON_TX_PF_UDP_CHKSUM;
- first_txd->processFlags |= TYPHOON_TX_PF_IP_CHKSUM;
- }
-
- if(vlan_tx_tag_present(skb)) {
- first_txd->processFlags |=
- TYPHOON_TX_PF_INSERT_VLAN | TYPHOON_TX_PF_VLAN_PRIORITY;
- first_txd->processFlags |=
- cpu_to_le32(htons(vlan_tx_tag_get(skb)) <<
- TYPHOON_TX_PF_VLAN_TAG_SHIFT);
- }
-
- if (skb_is_gso(skb)) {
- first_txd->processFlags |= TYPHOON_TX_PF_TCP_SEGMENT;
- first_txd->numDesc++;
-
- typhoon_tso_fill(skb, txRing, tp->txlo_dma_addr);
- }
-
- txd = (struct tx_desc *) (txRing->ringBase + txRing->lastWrite);
- typhoon_inc_tx_index(&txRing->lastWrite, 1);
-
- /* No need to worry about padding packet -- the firmware pads
- * it with zeros to ETH_ZLEN for us.
- */
- if(skb_shinfo(skb)->nr_frags == 0) {
- skb_dma = pci_map_single(tp->tx_pdev, skb->data, skb->len,
- PCI_DMA_TODEVICE);
- txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
- txd->len = cpu_to_le16(skb->len);
- txd->frag.addr = cpu_to_le32(skb_dma);
- txd->frag.addrHi = 0;
- first_txd->numDesc++;
- } else {
- int i, len;
-
- len = skb_headlen(skb);
- skb_dma = pci_map_single(tp->tx_pdev, skb->data, len,
- PCI_DMA_TODEVICE);
- txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
- txd->len = cpu_to_le16(len);
- txd->frag.addr = cpu_to_le32(skb_dma);
- txd->frag.addrHi = 0;
- first_txd->numDesc++;
-
- for(i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- void *frag_addr;
-
- txd = (struct tx_desc *) (txRing->ringBase +
- txRing->lastWrite);
- typhoon_inc_tx_index(&txRing->lastWrite, 1);
-
- len = frag->size;
- frag_addr = (void *) page_address(frag->page) +
- frag->page_offset;
- skb_dma = pci_map_single(tp->tx_pdev, frag_addr, len,
- PCI_DMA_TODEVICE);
- txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
- txd->len = cpu_to_le16(len);
- txd->frag.addr = cpu_to_le32(skb_dma);
- txd->frag.addrHi = 0;
- first_txd->numDesc++;
- }
- }
-
- /* Kick the 3XP
- */
- wmb();
- iowrite32(txRing->lastWrite, tp->tx_ioaddr + txRing->writeRegister);
-
- /* If we don't have room to put the worst case packet on the
- * queue, then we must stop the queue. We need 2 extra
- * descriptors -- one to prevent ring wrap, and one for the
- * Tx header.
- */
- numDesc = MAX_SKB_FRAGS + TSO_NUM_DESCRIPTORS + 1;
-
- if(typhoon_num_free_tx(txRing) < (numDesc + 2)) {
- netif_stop_queue(dev);
-
- /* A Tx complete IRQ could have gotten between, making
- * the ring free again. Only need to recheck here, since
- * Tx is serialized.
- */
- if(typhoon_num_free_tx(txRing) >= (numDesc + 2))
- netif_wake_queue(dev);
- }
-
- return NETDEV_TX_OK;
-}
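/*
 * Illustrative aside, not part of the original typhoon.c: the free-space
 * checks in typhoon_start_tx() above rely on the usual "keep one slot
 * unused" ring convention, where lastWrite == lastRead means empty.  A
 * standalone sketch of that arithmetic over byte offsets (ringSize being
 * the ring length in bytes) could look like this:
 */
static inline int ring_space_sketch(int lastWrite, int lastRead, int ringSize)
{
	/*
	 * Example: a 16-entry ring of 16-byte descriptors (ringSize 256)
	 * with lastWrite 48 and lastRead 0 reports 207 free bytes; the
	 * writer may never advance onto the reader's slot, so the ring can
	 * never "wrap and look empty".
	 */
	return (ringSize + lastRead - lastWrite - 1) % ringSize;
}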
-
-static void
-typhoon_set_rx_mode(struct net_device *dev)
-{
- struct typhoon *tp = netdev_priv(dev);
- struct cmd_desc xp_cmd;
- u32 mc_filter[2];
- __le16 filter;
-
- filter = TYPHOON_RX_FILTER_DIRECTED | TYPHOON_RX_FILTER_BROADCAST;
- if(dev->flags & IFF_PROMISC) {
- filter |= TYPHOON_RX_FILTER_PROMISCOUS;
- } else if ((netdev_mc_count(dev) > multicast_filter_limit) ||
- (dev->flags & IFF_ALLMULTI)) {
- /* Too many to match, or accept all multicasts. */
- filter |= TYPHOON_RX_FILTER_ALL_MCAST;
- } else if (!netdev_mc_empty(dev)) {
- struct netdev_hw_addr *ha;
-
- memset(mc_filter, 0, sizeof(mc_filter));
- netdev_for_each_mc_addr(ha, dev) {
- int bit = ether_crc(ETH_ALEN, ha->addr) & 0x3f;
- mc_filter[bit >> 5] |= 1 << (bit & 0x1f);
- }
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd,
- TYPHOON_CMD_SET_MULTICAST_HASH);
- xp_cmd.parm1 = TYPHOON_MCAST_HASH_SET;
- xp_cmd.parm2 = cpu_to_le32(mc_filter[0]);
- xp_cmd.parm3 = cpu_to_le32(mc_filter[1]);
- typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
-
- filter |= TYPHOON_RX_FILTER_MCAST_HASH;
- }
-
- INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_RX_FILTER);
- xp_cmd.parm1 = filter;
- typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
-}
-
-static int
-typhoon_do_get_stats(struct typhoon *tp)
-{
- struct net_device_stats *stats = &tp->stats;
- struct net_device_stats *saved = &tp->stats_saved;
- struct cmd_desc xp_cmd;
- struct resp_desc xp_resp[7];
- struct stats_resp *s = (struct stats_resp *) xp_resp;
- int err;
-
- INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_STATS);
- err = typhoon_issue_command(tp, 1, &xp_cmd, 7, xp_resp);
- if(err < 0)
- return err;
-
-	/* 3Com's Linux driver uses txMultipleCollisions as its
- * collisions value, but there is some other collision info as well...
- *
- * The extra status reported would be a good candidate for
- * ethtool_ops->get_{strings,stats}()
- */
- stats->tx_packets = le32_to_cpu(s->txPackets) +
- saved->tx_packets;
- stats->tx_bytes = le64_to_cpu(s->txBytes) +
- saved->tx_bytes;
- stats->tx_errors = le32_to_cpu(s->txCarrierLost) +
- saved->tx_errors;
- stats->tx_carrier_errors = le32_to_cpu(s->txCarrierLost) +
- saved->tx_carrier_errors;
- stats->collisions = le32_to_cpu(s->txMultipleCollisions) +
- saved->collisions;
- stats->rx_packets = le32_to_cpu(s->rxPacketsGood) +
- saved->rx_packets;
- stats->rx_bytes = le64_to_cpu(s->rxBytesGood) +
- saved->rx_bytes;
- stats->rx_fifo_errors = le32_to_cpu(s->rxFifoOverruns) +
- saved->rx_fifo_errors;
- stats->rx_errors = le32_to_cpu(s->rxFifoOverruns) +
- le32_to_cpu(s->BadSSD) + le32_to_cpu(s->rxCrcErrors) +
- saved->rx_errors;
- stats->rx_crc_errors = le32_to_cpu(s->rxCrcErrors) +
- saved->rx_crc_errors;
- stats->rx_length_errors = le32_to_cpu(s->rxOversized) +
- saved->rx_length_errors;
- tp->speed = (s->linkStatus & TYPHOON_LINK_100MBPS) ?
- SPEED_100 : SPEED_10;
- tp->duplex = (s->linkStatus & TYPHOON_LINK_FULL_DUPLEX) ?
- DUPLEX_FULL : DUPLEX_HALF;
-
- return 0;
-}
-
-static struct net_device_stats *
-typhoon_get_stats(struct net_device *dev)
-{
- struct typhoon *tp = netdev_priv(dev);
- struct net_device_stats *stats = &tp->stats;
- struct net_device_stats *saved = &tp->stats_saved;
-
- smp_rmb();
- if(tp->card_state == Sleeping)
- return saved;
-
- if(typhoon_do_get_stats(tp) < 0) {
- netdev_err(dev, "error getting stats\n");
- return saved;
- }
-
- return stats;
-}
-
-static int
-typhoon_set_mac_address(struct net_device *dev, void *addr)
-{
- struct sockaddr *saddr = (struct sockaddr *) addr;
-
- if(netif_running(dev))
- return -EBUSY;
-
- memcpy(dev->dev_addr, saddr->sa_data, dev->addr_len);
- return 0;
-}
-
-static void
-typhoon_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
-{
- struct typhoon *tp = netdev_priv(dev);
- struct pci_dev *pci_dev = tp->pdev;
- struct cmd_desc xp_cmd;
- struct resp_desc xp_resp[3];
-
- smp_rmb();
- if(tp->card_state == Sleeping) {
- strcpy(info->fw_version, "Sleep image");
- } else {
- INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_VERSIONS);
- if(typhoon_issue_command(tp, 1, &xp_cmd, 3, xp_resp) < 0) {
- strcpy(info->fw_version, "Unknown runtime");
- } else {
- u32 sleep_ver = le32_to_cpu(xp_resp[0].parm2);
- snprintf(info->fw_version, 32, "%02x.%03x.%03x",
- sleep_ver >> 24, (sleep_ver >> 12) & 0xfff,
- sleep_ver & 0xfff);
- }
- }
-
- strcpy(info->driver, KBUILD_MODNAME);
- strcpy(info->bus_info, pci_name(pci_dev));
-}
-
-static int
-typhoon_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct typhoon *tp = netdev_priv(dev);
-
- cmd->supported = SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
- SUPPORTED_Autoneg;
-
- switch (tp->xcvr_select) {
- case TYPHOON_XCVR_10HALF:
- cmd->advertising = ADVERTISED_10baseT_Half;
- break;
- case TYPHOON_XCVR_10FULL:
- cmd->advertising = ADVERTISED_10baseT_Full;
- break;
- case TYPHOON_XCVR_100HALF:
- cmd->advertising = ADVERTISED_100baseT_Half;
- break;
- case TYPHOON_XCVR_100FULL:
- cmd->advertising = ADVERTISED_100baseT_Full;
- break;
- case TYPHOON_XCVR_AUTONEG:
- cmd->advertising = ADVERTISED_10baseT_Half |
- ADVERTISED_10baseT_Full |
- ADVERTISED_100baseT_Half |
- ADVERTISED_100baseT_Full |
- ADVERTISED_Autoneg;
- break;
- }
-
- if(tp->capabilities & TYPHOON_FIBER) {
- cmd->supported |= SUPPORTED_FIBRE;
- cmd->advertising |= ADVERTISED_FIBRE;
- cmd->port = PORT_FIBRE;
- } else {
- cmd->supported |= SUPPORTED_10baseT_Half |
- SUPPORTED_10baseT_Full |
- SUPPORTED_TP;
- cmd->advertising |= ADVERTISED_TP;
- cmd->port = PORT_TP;
- }
-
- /* need to get stats to make these link speed/duplex valid */
- typhoon_do_get_stats(tp);
- ethtool_cmd_speed_set(cmd, tp->speed);
- cmd->duplex = tp->duplex;
- cmd->phy_address = 0;
- cmd->transceiver = XCVR_INTERNAL;
- if(tp->xcvr_select == TYPHOON_XCVR_AUTONEG)
- cmd->autoneg = AUTONEG_ENABLE;
- else
- cmd->autoneg = AUTONEG_DISABLE;
- cmd->maxtxpkt = 1;
- cmd->maxrxpkt = 1;
-
- return 0;
-}
-
-static int
-typhoon_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct typhoon *tp = netdev_priv(dev);
- u32 speed = ethtool_cmd_speed(cmd);
- struct cmd_desc xp_cmd;
- __le16 xcvr;
- int err;
-
- err = -EINVAL;
- if (cmd->autoneg == AUTONEG_ENABLE) {
- xcvr = TYPHOON_XCVR_AUTONEG;
- } else {
- if (cmd->duplex == DUPLEX_HALF) {
- if (speed == SPEED_10)
- xcvr = TYPHOON_XCVR_10HALF;
- else if (speed == SPEED_100)
- xcvr = TYPHOON_XCVR_100HALF;
- else
- goto out;
- } else if (cmd->duplex == DUPLEX_FULL) {
- if (speed == SPEED_10)
- xcvr = TYPHOON_XCVR_10FULL;
- else if (speed == SPEED_100)
- xcvr = TYPHOON_XCVR_100FULL;
- else
- goto out;
- } else
- goto out;
- }
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_XCVR_SELECT);
- xp_cmd.parm1 = xcvr;
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto out;
-
- tp->xcvr_select = xcvr;
- if(cmd->autoneg == AUTONEG_ENABLE) {
- tp->speed = 0xff; /* invalid */
- tp->duplex = 0xff; /* invalid */
- } else {
- tp->speed = speed;
- tp->duplex = cmd->duplex;
- }
-
-out:
- return err;
-}
-
-static void
-typhoon_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
-{
- struct typhoon *tp = netdev_priv(dev);
-
- wol->supported = WAKE_PHY | WAKE_MAGIC;
- wol->wolopts = 0;
- if(tp->wol_events & TYPHOON_WAKE_LINK_EVENT)
- wol->wolopts |= WAKE_PHY;
- if(tp->wol_events & TYPHOON_WAKE_MAGIC_PKT)
- wol->wolopts |= WAKE_MAGIC;
- memset(&wol->sopass, 0, sizeof(wol->sopass));
-}
-
-static int
-typhoon_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
-{
- struct typhoon *tp = netdev_priv(dev);
-
- if(wol->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
- return -EINVAL;
-
- tp->wol_events = 0;
- if(wol->wolopts & WAKE_PHY)
- tp->wol_events |= TYPHOON_WAKE_LINK_EVENT;
- if(wol->wolopts & WAKE_MAGIC)
- tp->wol_events |= TYPHOON_WAKE_MAGIC_PKT;
-
- return 0;
-}
-
-static void
-typhoon_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ering)
-{
- ering->rx_max_pending = RXENT_ENTRIES;
- ering->rx_mini_max_pending = 0;
- ering->rx_jumbo_max_pending = 0;
- ering->tx_max_pending = TXLO_ENTRIES - 1;
-
- ering->rx_pending = RXENT_ENTRIES;
- ering->rx_mini_pending = 0;
- ering->rx_jumbo_pending = 0;
- ering->tx_pending = TXLO_ENTRIES - 1;
-}
-
-static const struct ethtool_ops typhoon_ethtool_ops = {
- .get_settings = typhoon_get_settings,
- .set_settings = typhoon_set_settings,
- .get_drvinfo = typhoon_get_drvinfo,
- .get_wol = typhoon_get_wol,
- .set_wol = typhoon_set_wol,
- .get_link = ethtool_op_get_link,
- .get_ringparam = typhoon_get_ringparam,
-};
-
-static int
-typhoon_wait_interrupt(void __iomem *ioaddr)
-{
- int i, err = 0;
-
- for(i = 0; i < TYPHOON_WAIT_TIMEOUT; i++) {
- if(ioread32(ioaddr + TYPHOON_REG_INTR_STATUS) &
- TYPHOON_INTR_BOOTCMD)
- goto out;
- udelay(TYPHOON_UDELAY);
- }
-
- err = -ETIMEDOUT;
-
-out:
- iowrite32(TYPHOON_INTR_BOOTCMD, ioaddr + TYPHOON_REG_INTR_STATUS);
- return err;
-}
-
-#define shared_offset(x) offsetof(struct typhoon_shared, x)
-
-static void
-typhoon_init_interface(struct typhoon *tp)
-{
- struct typhoon_interface *iface = &tp->shared->iface;
- dma_addr_t shared_dma;
-
- memset(tp->shared, 0, sizeof(struct typhoon_shared));
-
- /* The *Hi members of iface are all init'd to zero by the memset().
- */
- shared_dma = tp->shared_dma + shared_offset(indexes);
- iface->ringIndex = cpu_to_le32(shared_dma);
-
- shared_dma = tp->shared_dma + shared_offset(txLo);
- iface->txLoAddr = cpu_to_le32(shared_dma);
- iface->txLoSize = cpu_to_le32(TXLO_ENTRIES * sizeof(struct tx_desc));
-
- shared_dma = tp->shared_dma + shared_offset(txHi);
- iface->txHiAddr = cpu_to_le32(shared_dma);
- iface->txHiSize = cpu_to_le32(TXHI_ENTRIES * sizeof(struct tx_desc));
-
- shared_dma = tp->shared_dma + shared_offset(rxBuff);
- iface->rxBuffAddr = cpu_to_le32(shared_dma);
- iface->rxBuffSize = cpu_to_le32(RXFREE_ENTRIES *
- sizeof(struct rx_free));
-
- shared_dma = tp->shared_dma + shared_offset(rxLo);
- iface->rxLoAddr = cpu_to_le32(shared_dma);
- iface->rxLoSize = cpu_to_le32(RX_ENTRIES * sizeof(struct rx_desc));
-
- shared_dma = tp->shared_dma + shared_offset(rxHi);
- iface->rxHiAddr = cpu_to_le32(shared_dma);
- iface->rxHiSize = cpu_to_le32(RX_ENTRIES * sizeof(struct rx_desc));
-
- shared_dma = tp->shared_dma + shared_offset(cmd);
- iface->cmdAddr = cpu_to_le32(shared_dma);
- iface->cmdSize = cpu_to_le32(COMMAND_RING_SIZE);
-
- shared_dma = tp->shared_dma + shared_offset(resp);
- iface->respAddr = cpu_to_le32(shared_dma);
- iface->respSize = cpu_to_le32(RESPONSE_RING_SIZE);
-
- shared_dma = tp->shared_dma + shared_offset(zeroWord);
- iface->zeroAddr = cpu_to_le32(shared_dma);
-
- tp->indexes = &tp->shared->indexes;
- tp->txLoRing.ringBase = (u8 *) tp->shared->txLo;
- tp->txHiRing.ringBase = (u8 *) tp->shared->txHi;
- tp->rxLoRing.ringBase = (u8 *) tp->shared->rxLo;
- tp->rxHiRing.ringBase = (u8 *) tp->shared->rxHi;
- tp->rxBuffRing.ringBase = (u8 *) tp->shared->rxBuff;
- tp->cmdRing.ringBase = (u8 *) tp->shared->cmd;
- tp->respRing.ringBase = (u8 *) tp->shared->resp;
-
- tp->txLoRing.writeRegister = TYPHOON_REG_TX_LO_READY;
- tp->txHiRing.writeRegister = TYPHOON_REG_TX_HI_READY;
-
- tp->txlo_dma_addr = le32_to_cpu(iface->txLoAddr);
- tp->card_state = Sleeping;
-
- tp->offload = TYPHOON_OFFLOAD_IP_CHKSUM | TYPHOON_OFFLOAD_TCP_CHKSUM;
- tp->offload |= TYPHOON_OFFLOAD_UDP_CHKSUM | TSO_OFFLOAD_ON;
- tp->offload |= TYPHOON_OFFLOAD_VLAN;
-
- spin_lock_init(&tp->command_lock);
-
- /* Force the writes to the shared memory area out before continuing. */
- wmb();
-}
-
-static void
-typhoon_init_rings(struct typhoon *tp)
-{
- memset(tp->indexes, 0, sizeof(struct typhoon_indexes));
-
- tp->txLoRing.lastWrite = 0;
- tp->txHiRing.lastWrite = 0;
- tp->rxLoRing.lastWrite = 0;
- tp->rxHiRing.lastWrite = 0;
- tp->rxBuffRing.lastWrite = 0;
- tp->cmdRing.lastWrite = 0;
- tp->respRing.lastWrite = 0;
-
- tp->txLoRing.lastRead = 0;
- tp->txHiRing.lastRead = 0;
-}
-
-static const struct firmware *typhoon_fw;
-
-static int
-typhoon_request_firmware(struct typhoon *tp)
-{
- const struct typhoon_file_header *fHdr;
- const struct typhoon_section_header *sHdr;
- const u8 *image_data;
- u32 numSections;
- u32 section_len;
- u32 remaining;
- int err;
-
- if (typhoon_fw)
- return 0;
-
- err = request_firmware(&typhoon_fw, FIRMWARE_NAME, &tp->pdev->dev);
- if (err) {
- netdev_err(tp->dev, "Failed to load firmware \"%s\"\n",
- FIRMWARE_NAME);
- return err;
- }
-
- image_data = (u8 *) typhoon_fw->data;
- remaining = typhoon_fw->size;
- if (remaining < sizeof(struct typhoon_file_header))
- goto invalid_fw;
-
- fHdr = (struct typhoon_file_header *) image_data;
- if (memcmp(fHdr->tag, "TYPHOON", 8))
- goto invalid_fw;
-
- numSections = le32_to_cpu(fHdr->numSections);
- image_data += sizeof(struct typhoon_file_header);
- remaining -= sizeof(struct typhoon_file_header);
-
- while (numSections--) {
- if (remaining < sizeof(struct typhoon_section_header))
- goto invalid_fw;
-
- sHdr = (struct typhoon_section_header *) image_data;
- image_data += sizeof(struct typhoon_section_header);
- section_len = le32_to_cpu(sHdr->len);
-
- if (remaining < section_len)
- goto invalid_fw;
-
- image_data += section_len;
- remaining -= section_len;
- }
-
- return 0;
-
-invalid_fw:
- netdev_err(tp->dev, "Invalid firmware image\n");
- release_firmware(typhoon_fw);
- typhoon_fw = NULL;
- return -EINVAL;
-}
-
-static int
-typhoon_download_firmware(struct typhoon *tp)
-{
- void __iomem *ioaddr = tp->ioaddr;
- struct pci_dev *pdev = tp->pdev;
- const struct typhoon_file_header *fHdr;
- const struct typhoon_section_header *sHdr;
- const u8 *image_data;
- void *dpage;
- dma_addr_t dpage_dma;
- __sum16 csum;
- u32 irqEnabled;
- u32 irqMasked;
- u32 numSections;
- u32 section_len;
- u32 len;
- u32 load_addr;
- u32 hmac;
- int i;
- int err;
-
- image_data = (u8 *) typhoon_fw->data;
- fHdr = (struct typhoon_file_header *) image_data;
-
- /* Cannot just map the firmware image using pci_map_single() as
- * the firmware is vmalloc()'d and may not be physically contiguous,
- * so we allocate some consistent memory to copy the sections into.
- */
- err = -ENOMEM;
- dpage = pci_alloc_consistent(pdev, PAGE_SIZE, &dpage_dma);
- if(!dpage) {
- netdev_err(tp->dev, "no DMA mem for firmware\n");
- goto err_out;
- }
-
- irqEnabled = ioread32(ioaddr + TYPHOON_REG_INTR_ENABLE);
- iowrite32(irqEnabled | TYPHOON_INTR_BOOTCMD,
- ioaddr + TYPHOON_REG_INTR_ENABLE);
- irqMasked = ioread32(ioaddr + TYPHOON_REG_INTR_MASK);
- iowrite32(irqMasked | TYPHOON_INTR_BOOTCMD,
- ioaddr + TYPHOON_REG_INTR_MASK);
-
- err = -ETIMEDOUT;
- if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_WAITING_FOR_HOST) < 0) {
- netdev_err(tp->dev, "card ready timeout\n");
- goto err_out_irq;
- }
-
- numSections = le32_to_cpu(fHdr->numSections);
- load_addr = le32_to_cpu(fHdr->startAddr);
-
- iowrite32(TYPHOON_INTR_BOOTCMD, ioaddr + TYPHOON_REG_INTR_STATUS);
- iowrite32(load_addr, ioaddr + TYPHOON_REG_DOWNLOAD_BOOT_ADDR);
- hmac = le32_to_cpu(fHdr->hmacDigest[0]);
- iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_0);
- hmac = le32_to_cpu(fHdr->hmacDigest[1]);
- iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_1);
- hmac = le32_to_cpu(fHdr->hmacDigest[2]);
- iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_2);
- hmac = le32_to_cpu(fHdr->hmacDigest[3]);
- iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_3);
- hmac = le32_to_cpu(fHdr->hmacDigest[4]);
- iowrite32(hmac, ioaddr + TYPHOON_REG_DOWNLOAD_HMAC_4);
- typhoon_post_pci_writes(ioaddr);
- iowrite32(TYPHOON_BOOTCMD_RUNTIME_IMAGE, ioaddr + TYPHOON_REG_COMMAND);
-
- image_data += sizeof(struct typhoon_file_header);
-
- /* The ioread32() in typhoon_wait_interrupt() will force the
- * last write to the command register to post, so
- * we don't need a typhoon_post_pci_writes() after it.
- */
- for(i = 0; i < numSections; i++) {
- sHdr = (struct typhoon_section_header *) image_data;
- image_data += sizeof(struct typhoon_section_header);
- load_addr = le32_to_cpu(sHdr->startAddr);
- section_len = le32_to_cpu(sHdr->len);
-
- while(section_len) {
- len = min_t(u32, section_len, PAGE_SIZE);
-
- if(typhoon_wait_interrupt(ioaddr) < 0 ||
- ioread32(ioaddr + TYPHOON_REG_STATUS) !=
- TYPHOON_STATUS_WAITING_FOR_SEGMENT) {
- netdev_err(tp->dev, "segment ready timeout\n");
- goto err_out_irq;
- }
-
-			/* Do a pseudo IPv4 checksum on the data -- first
- * need to convert each u16 to cpu order before
- * summing. Fortunately, due to the properties of
- * the checksum, we can do this once, at the end.
- */
- csum = csum_fold(csum_partial_copy_nocheck(image_data,
- dpage, len,
- 0));
-
- iowrite32(len, ioaddr + TYPHOON_REG_BOOT_LENGTH);
- iowrite32(le16_to_cpu((__force __le16)csum),
- ioaddr + TYPHOON_REG_BOOT_CHECKSUM);
- iowrite32(load_addr,
- ioaddr + TYPHOON_REG_BOOT_DEST_ADDR);
- iowrite32(0, ioaddr + TYPHOON_REG_BOOT_DATA_HI);
- iowrite32(dpage_dma, ioaddr + TYPHOON_REG_BOOT_DATA_LO);
- typhoon_post_pci_writes(ioaddr);
- iowrite32(TYPHOON_BOOTCMD_SEG_AVAILABLE,
- ioaddr + TYPHOON_REG_COMMAND);
-
- image_data += len;
- load_addr += len;
- section_len -= len;
- }
- }
-
- if(typhoon_wait_interrupt(ioaddr) < 0 ||
- ioread32(ioaddr + TYPHOON_REG_STATUS) !=
- TYPHOON_STATUS_WAITING_FOR_SEGMENT) {
- netdev_err(tp->dev, "final segment ready timeout\n");
- goto err_out_irq;
- }
-
- iowrite32(TYPHOON_BOOTCMD_DNLD_COMPLETE, ioaddr + TYPHOON_REG_COMMAND);
-
- if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_WAITING_FOR_BOOT) < 0) {
- netdev_err(tp->dev, "boot ready timeout, status 0x%0x\n",
- ioread32(ioaddr + TYPHOON_REG_STATUS));
- goto err_out_irq;
- }
-
- err = 0;
-
-err_out_irq:
- iowrite32(irqMasked, ioaddr + TYPHOON_REG_INTR_MASK);
- iowrite32(irqEnabled, ioaddr + TYPHOON_REG_INTR_ENABLE);
-
- pci_free_consistent(pdev, PAGE_SIZE, dpage, dpage_dma);
-
-err_out:
- return err;
-}
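/*
 * Illustrative aside, not part of the original typhoon.c: the download
 * loop above leans on a property of the ones-complement sum (RFC 1071):
 * the 16-bit words can be summed in either byte order and the result
 * byte-swapped once at the end, which is why a single le16_to_cpu() of
 * the folded checksum suffices.  A minimal user-space sketch of folding
 * such a sum over a buffer:
 */
#include <stddef.h>
#include <stdint.h>

static uint16_t csum_fold_sketch(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)	/* sum 16-bit words */
		sum += (uint32_t)buf[i] << 8 | buf[i + 1];
	if (len & 1)				/* pad an odd trailing byte */
		sum += (uint32_t)buf[len - 1] << 8;
	while (sum >> 16)			/* fold carries back in */
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}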
-
-static int
-typhoon_boot_3XP(struct typhoon *tp, u32 initial_status)
-{
- void __iomem *ioaddr = tp->ioaddr;
-
- if(typhoon_wait_status(ioaddr, initial_status) < 0) {
- netdev_err(tp->dev, "boot ready timeout\n");
- goto out_timeout;
- }
-
- iowrite32(0, ioaddr + TYPHOON_REG_BOOT_RECORD_ADDR_HI);
- iowrite32(tp->shared_dma, ioaddr + TYPHOON_REG_BOOT_RECORD_ADDR_LO);
- typhoon_post_pci_writes(ioaddr);
- iowrite32(TYPHOON_BOOTCMD_REG_BOOT_RECORD,
- ioaddr + TYPHOON_REG_COMMAND);
-
- if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_RUNNING) < 0) {
- netdev_err(tp->dev, "boot finish timeout (status 0x%x)\n",
- ioread32(ioaddr + TYPHOON_REG_STATUS));
- goto out_timeout;
- }
-
- /* Clear the Transmit and Command ready registers
- */
- iowrite32(0, ioaddr + TYPHOON_REG_TX_HI_READY);
- iowrite32(0, ioaddr + TYPHOON_REG_CMD_READY);
- iowrite32(0, ioaddr + TYPHOON_REG_TX_LO_READY);
- typhoon_post_pci_writes(ioaddr);
- iowrite32(TYPHOON_BOOTCMD_BOOT, ioaddr + TYPHOON_REG_COMMAND);
-
- return 0;
-
-out_timeout:
- return -ETIMEDOUT;
-}
-
-static u32
-typhoon_clean_tx(struct typhoon *tp, struct transmit_ring *txRing,
- volatile __le32 * index)
-{
- u32 lastRead = txRing->lastRead;
- struct tx_desc *tx;
- dma_addr_t skb_dma;
- int dma_len;
- int type;
-
- while(lastRead != le32_to_cpu(*index)) {
- tx = (struct tx_desc *) (txRing->ringBase + lastRead);
- type = tx->flags & TYPHOON_TYPE_MASK;
-
- if(type == TYPHOON_TX_DESC) {
- /* This tx_desc describes a packet.
- */
- unsigned long ptr = tx->tx_addr;
- struct sk_buff *skb = (struct sk_buff *) ptr;
- dev_kfree_skb_irq(skb);
- } else if(type == TYPHOON_FRAG_DESC) {
- /* This tx_desc describes a memory mapping. Free it.
- */
- skb_dma = (dma_addr_t) le32_to_cpu(tx->frag.addr);
- dma_len = le16_to_cpu(tx->len);
- pci_unmap_single(tp->pdev, skb_dma, dma_len,
- PCI_DMA_TODEVICE);
- }
-
- tx->flags = 0;
- typhoon_inc_tx_index(&lastRead, 1);
- }
-
- return lastRead;
-}
-
-static void
-typhoon_tx_complete(struct typhoon *tp, struct transmit_ring *txRing,
- volatile __le32 * index)
-{
- u32 lastRead;
- int numDesc = MAX_SKB_FRAGS + 1;
-
- /* This will need changing if we start to use the Hi Tx ring. */
- lastRead = typhoon_clean_tx(tp, txRing, index);
- if(netif_queue_stopped(tp->dev) && typhoon_num_free(txRing->lastWrite,
- lastRead, TXLO_ENTRIES) > (numDesc + 2))
- netif_wake_queue(tp->dev);
-
- txRing->lastRead = lastRead;
- smp_wmb();
-}
-
-static void
-typhoon_recycle_rx_skb(struct typhoon *tp, u32 idx)
-{
- struct typhoon_indexes *indexes = tp->indexes;
- struct rxbuff_ent *rxb = &tp->rxbuffers[idx];
- struct basic_ring *ring = &tp->rxBuffRing;
- struct rx_free *r;
-
- if((ring->lastWrite + sizeof(*r)) % (RXFREE_ENTRIES * sizeof(*r)) ==
- le32_to_cpu(indexes->rxBuffCleared)) {
- /* no room in ring, just drop the skb
- */
- dev_kfree_skb_any(rxb->skb);
- rxb->skb = NULL;
- return;
- }
-
- r = (struct rx_free *) (ring->ringBase + ring->lastWrite);
- typhoon_inc_rxfree_index(&ring->lastWrite, 1);
- r->virtAddr = idx;
- r->physAddr = cpu_to_le32(rxb->dma_addr);
-
- /* Tell the card about it */
- wmb();
- indexes->rxBuffReady = cpu_to_le32(ring->lastWrite);
-}
-
-static int
-typhoon_alloc_rx_skb(struct typhoon *tp, u32 idx)
-{
- struct typhoon_indexes *indexes = tp->indexes;
- struct rxbuff_ent *rxb = &tp->rxbuffers[idx];
- struct basic_ring *ring = &tp->rxBuffRing;
- struct rx_free *r;
- struct sk_buff *skb;
- dma_addr_t dma_addr;
-
- rxb->skb = NULL;
-
- if((ring->lastWrite + sizeof(*r)) % (RXFREE_ENTRIES * sizeof(*r)) ==
- le32_to_cpu(indexes->rxBuffCleared))
- return -ENOMEM;
-
- skb = dev_alloc_skb(PKT_BUF_SZ);
- if(!skb)
- return -ENOMEM;
-
-#if 0
-	/* Please, 3com, fix the firmware to allow DMA to an unaligned
- * address! Pretty please?
- */
- skb_reserve(skb, 2);
-#endif
-
- skb->dev = tp->dev;
- dma_addr = pci_map_single(tp->pdev, skb->data,
- PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
-
- /* Since no card does 64 bit DAC, the high bits will never
- * change from zero.
- */
- r = (struct rx_free *) (ring->ringBase + ring->lastWrite);
- typhoon_inc_rxfree_index(&ring->lastWrite, 1);
- r->virtAddr = idx;
- r->physAddr = cpu_to_le32(dma_addr);
- rxb->skb = skb;
- rxb->dma_addr = dma_addr;
-
- /* Tell the card about it */
- wmb();
- indexes->rxBuffReady = cpu_to_le32(ring->lastWrite);
- return 0;
-}
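/*
 * Illustrative aside, not part of the original typhoon.c: the "ring full"
 * test used by typhoon_alloc_rx_skb() and typhoon_recycle_rx_skb() above
 * works on byte offsets; the writer stops one entry short of the offset
 * the NIC reports as cleared.  A standalone sketch of that test, with
 * SKETCH_* constants standing in for sizeof(struct rx_free) and
 * RXFREE_ENTRIES:
 */
#include <stdbool.h>
#include <stdint.h>

#define SKETCH_ENTRY_SIZE	8
#define SKETCH_ENTRIES		128

static bool rxfree_ring_full(uint32_t lastWrite, uint32_t cleared)
{
	uint32_t next = (lastWrite + SKETCH_ENTRY_SIZE) %
			(SKETCH_ENTRIES * SKETCH_ENTRY_SIZE);

	return next == cleared;	/* full: next write would hit cleared */
}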
-
-static int
-typhoon_rx(struct typhoon *tp, struct basic_ring *rxRing, volatile __le32 * ready,
- volatile __le32 * cleared, int budget)
-{
- struct rx_desc *rx;
- struct sk_buff *skb, *new_skb;
- struct rxbuff_ent *rxb;
- dma_addr_t dma_addr;
- u32 local_ready;
- u32 rxaddr;
- int pkt_len;
- u32 idx;
- __le32 csum_bits;
- int received;
-
- received = 0;
- local_ready = le32_to_cpu(*ready);
- rxaddr = le32_to_cpu(*cleared);
- while(rxaddr != local_ready && budget > 0) {
- rx = (struct rx_desc *) (rxRing->ringBase + rxaddr);
- idx = rx->addr;
- rxb = &tp->rxbuffers[idx];
- skb = rxb->skb;
- dma_addr = rxb->dma_addr;
-
- typhoon_inc_rx_index(&rxaddr, 1);
-
- if(rx->flags & TYPHOON_RX_ERROR) {
- typhoon_recycle_rx_skb(tp, idx);
- continue;
- }
-
- pkt_len = le16_to_cpu(rx->frameLen);
-
- if(pkt_len < rx_copybreak &&
- (new_skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
- skb_reserve(new_skb, 2);
- pci_dma_sync_single_for_cpu(tp->pdev, dma_addr,
- PKT_BUF_SZ,
- PCI_DMA_FROMDEVICE);
- skb_copy_to_linear_data(new_skb, skb->data, pkt_len);
- pci_dma_sync_single_for_device(tp->pdev, dma_addr,
- PKT_BUF_SZ,
- PCI_DMA_FROMDEVICE);
- skb_put(new_skb, pkt_len);
- typhoon_recycle_rx_skb(tp, idx);
- } else {
- new_skb = skb;
- skb_put(new_skb, pkt_len);
- pci_unmap_single(tp->pdev, dma_addr, PKT_BUF_SZ,
- PCI_DMA_FROMDEVICE);
- typhoon_alloc_rx_skb(tp, idx);
- }
- new_skb->protocol = eth_type_trans(new_skb, tp->dev);
- csum_bits = rx->rxStatus & (TYPHOON_RX_IP_CHK_GOOD |
- TYPHOON_RX_UDP_CHK_GOOD | TYPHOON_RX_TCP_CHK_GOOD);
- if(csum_bits ==
- (TYPHOON_RX_IP_CHK_GOOD | TYPHOON_RX_TCP_CHK_GOOD) ||
- csum_bits ==
- (TYPHOON_RX_IP_CHK_GOOD | TYPHOON_RX_UDP_CHK_GOOD)) {
- new_skb->ip_summed = CHECKSUM_UNNECESSARY;
- } else
- skb_checksum_none_assert(new_skb);
-
- if (rx->rxStatus & TYPHOON_RX_VLAN)
- __vlan_hwaccel_put_tag(new_skb,
- ntohl(rx->vlanTag) & 0xffff);
- netif_receive_skb(new_skb);
-
- received++;
- budget--;
- }
- *cleared = cpu_to_le32(rxaddr);
-
- return received;
-}
-
-static void
-typhoon_fill_free_ring(struct typhoon *tp)
-{
- u32 i;
-
- for(i = 0; i < RXENT_ENTRIES; i++) {
- struct rxbuff_ent *rxb = &tp->rxbuffers[i];
- if(rxb->skb)
- continue;
- if(typhoon_alloc_rx_skb(tp, i) < 0)
- break;
- }
-}
-
-static int
-typhoon_poll(struct napi_struct *napi, int budget)
-{
- struct typhoon *tp = container_of(napi, struct typhoon, napi);
- struct typhoon_indexes *indexes = tp->indexes;
- int work_done;
-
- rmb();
- if(!tp->awaiting_resp && indexes->respReady != indexes->respCleared)
- typhoon_process_response(tp, 0, NULL);
-
- if(le32_to_cpu(indexes->txLoCleared) != tp->txLoRing.lastRead)
- typhoon_tx_complete(tp, &tp->txLoRing, &indexes->txLoCleared);
-
- work_done = 0;
-
- if(indexes->rxHiCleared != indexes->rxHiReady) {
- work_done += typhoon_rx(tp, &tp->rxHiRing, &indexes->rxHiReady,
- &indexes->rxHiCleared, budget);
- }
-
- if(indexes->rxLoCleared != indexes->rxLoReady) {
- work_done += typhoon_rx(tp, &tp->rxLoRing, &indexes->rxLoReady,
- &indexes->rxLoCleared, budget - work_done);
- }
-
- if(le32_to_cpu(indexes->rxBuffCleared) == tp->rxBuffRing.lastWrite) {
- /* rxBuff ring is empty, try to fill it. */
- typhoon_fill_free_ring(tp);
- }
-
- if (work_done < budget) {
- napi_complete(napi);
- iowrite32(TYPHOON_INTR_NONE,
- tp->ioaddr + TYPHOON_REG_INTR_MASK);
- typhoon_post_pci_writes(tp->ioaddr);
- }
-
- return work_done;
-}
-
-static irqreturn_t
-typhoon_interrupt(int irq, void *dev_instance)
-{
- struct net_device *dev = dev_instance;
- struct typhoon *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->ioaddr;
- u32 intr_status;
-
- intr_status = ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
- if(!(intr_status & TYPHOON_INTR_HOST_INT))
- return IRQ_NONE;
-
- iowrite32(intr_status, ioaddr + TYPHOON_REG_INTR_STATUS);
-
- if (napi_schedule_prep(&tp->napi)) {
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
- typhoon_post_pci_writes(ioaddr);
- __napi_schedule(&tp->napi);
- } else {
- netdev_err(dev, "Error, poll already scheduled\n");
- }
- return IRQ_HANDLED;
-}
-
-static void
-typhoon_free_rx_rings(struct typhoon *tp)
-{
- u32 i;
-
- for(i = 0; i < RXENT_ENTRIES; i++) {
- struct rxbuff_ent *rxb = &tp->rxbuffers[i];
- if(rxb->skb) {
- pci_unmap_single(tp->pdev, rxb->dma_addr, PKT_BUF_SZ,
- PCI_DMA_FROMDEVICE);
- dev_kfree_skb(rxb->skb);
- rxb->skb = NULL;
- }
- }
-}
-
-static int
-typhoon_sleep(struct typhoon *tp, pci_power_t state, __le16 events)
-{
- struct pci_dev *pdev = tp->pdev;
- void __iomem *ioaddr = tp->ioaddr;
- struct cmd_desc xp_cmd;
- int err;
-
- INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_ENABLE_WAKE_EVENTS);
- xp_cmd.parm1 = events;
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0) {
- netdev_err(tp->dev, "typhoon_sleep(): wake events cmd err %d\n",
- err);
- return err;
- }
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_GOTO_SLEEP);
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0) {
- netdev_err(tp->dev, "typhoon_sleep(): sleep cmd err %d\n", err);
- return err;
- }
-
- if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_SLEEPING) < 0)
- return -ETIMEDOUT;
-
- /* Since we cannot monitor the status of the link while sleeping,
- * tell the world it went away.
- */
- netif_carrier_off(tp->dev);
-
- pci_enable_wake(tp->pdev, state, 1);
- pci_disable_device(pdev);
- return pci_set_power_state(pdev, state);
-}
-
-static int
-typhoon_wakeup(struct typhoon *tp, int wait_type)
-{
- struct pci_dev *pdev = tp->pdev;
- void __iomem *ioaddr = tp->ioaddr;
-
- pci_set_power_state(pdev, PCI_D0);
- pci_restore_state(pdev);
-
- /* Post 2.x.x versions of the Sleep Image require a reset before
- * we can download the Runtime Image. But let's not make users of
- * the old firmware pay for the reset.
- */
- iowrite32(TYPHOON_BOOTCMD_WAKEUP, ioaddr + TYPHOON_REG_COMMAND);
- if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_WAITING_FOR_HOST) < 0 ||
- (tp->capabilities & TYPHOON_WAKEUP_NEEDS_RESET))
- return typhoon_reset(ioaddr, wait_type);
-
- return 0;
-}
-
-static int
-typhoon_start_runtime(struct typhoon *tp)
-{
- struct net_device *dev = tp->dev;
- void __iomem *ioaddr = tp->ioaddr;
- struct cmd_desc xp_cmd;
- int err;
-
- typhoon_init_rings(tp);
- typhoon_fill_free_ring(tp);
-
- err = typhoon_download_firmware(tp);
- if(err < 0) {
- netdev_err(tp->dev, "cannot load runtime on 3XP\n");
- goto error_out;
- }
-
- if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_BOOT) < 0) {
- netdev_err(tp->dev, "cannot boot 3XP\n");
- err = -EIO;
- goto error_out;
- }
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_MAX_PKT_SIZE);
- xp_cmd.parm1 = cpu_to_le16(PKT_BUF_SZ);
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto error_out;
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_MAC_ADDRESS);
- xp_cmd.parm1 = cpu_to_le16(ntohs(*(__be16 *)&dev->dev_addr[0]));
- xp_cmd.parm2 = cpu_to_le32(ntohl(*(__be32 *)&dev->dev_addr[2]));
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto error_out;
-
- /* Disable IRQ coalescing -- we can reenable it when 3Com gives
- * us some more information on how to control it.
- */
- INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_IRQ_COALESCE_CTRL);
- xp_cmd.parm1 = 0;
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto error_out;
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_XCVR_SELECT);
- xp_cmd.parm1 = tp->xcvr_select;
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto error_out;
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_VLAN_TYPE_WRITE);
- xp_cmd.parm1 = cpu_to_le16(ETH_P_8021Q);
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto error_out;
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_OFFLOAD_TASKS);
- xp_cmd.parm2 = tp->offload;
- xp_cmd.parm3 = tp->offload;
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto error_out;
-
- typhoon_set_rx_mode(dev);
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_TX_ENABLE);
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto error_out;
-
- INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_RX_ENABLE);
- err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
- if(err < 0)
- goto error_out;
-
- tp->card_state = Running;
- smp_wmb();
-
- iowrite32(TYPHOON_INTR_ENABLE_ALL, ioaddr + TYPHOON_REG_INTR_ENABLE);
- iowrite32(TYPHOON_INTR_NONE, ioaddr + TYPHOON_REG_INTR_MASK);
- typhoon_post_pci_writes(ioaddr);
-
- return 0;
-
-error_out:
- typhoon_reset(ioaddr, WaitNoSleep);
- typhoon_free_rx_rings(tp);
- typhoon_init_rings(tp);
- return err;
-}
-
-static int
-typhoon_stop_runtime(struct typhoon *tp, int wait_type)
-{
- struct typhoon_indexes *indexes = tp->indexes;
- struct transmit_ring *txLo = &tp->txLoRing;
- void __iomem *ioaddr = tp->ioaddr;
- struct cmd_desc xp_cmd;
- int i;
-
- /* Disable interrupts early, since we can't schedule a poll
- * when called with !netif_running(). This will be posted
- * when we force the posting of the command.
- */
- iowrite32(TYPHOON_INTR_NONE, ioaddr + TYPHOON_REG_INTR_ENABLE);
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_RX_DISABLE);
- typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
-
-	/* Wait 1/2 sec for any outstanding transmits to complete.
-	 * We'll clean up after the reset if this times out.
- */
- for(i = 0; i < TYPHOON_WAIT_TIMEOUT; i++) {
- if(indexes->txLoCleared == cpu_to_le32(txLo->lastWrite))
- break;
- udelay(TYPHOON_UDELAY);
- }
-
- if(i == TYPHOON_WAIT_TIMEOUT)
- netdev_err(tp->dev, "halt timed out waiting for Tx to complete\n");
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_TX_DISABLE);
- typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
-
- /* save the statistics so when we bring the interface up again,
- * the values reported to userspace are correct.
- */
- tp->card_state = Sleeping;
- smp_wmb();
- typhoon_do_get_stats(tp);
- memcpy(&tp->stats_saved, &tp->stats, sizeof(struct net_device_stats));
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_HALT);
- typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
-
- if(typhoon_wait_status(ioaddr, TYPHOON_STATUS_HALTED) < 0)
- netdev_err(tp->dev, "timed out waiting for 3XP to halt\n");
-
- if(typhoon_reset(ioaddr, wait_type) < 0) {
- netdev_err(tp->dev, "unable to reset 3XP\n");
- return -ETIMEDOUT;
- }
-
- /* cleanup any outstanding Tx packets */
- if(indexes->txLoCleared != cpu_to_le32(txLo->lastWrite)) {
- indexes->txLoCleared = cpu_to_le32(txLo->lastWrite);
- typhoon_clean_tx(tp, &tp->txLoRing, &indexes->txLoCleared);
- }
-
- return 0;
-}
-
-static void
-typhoon_tx_timeout(struct net_device *dev)
-{
- struct typhoon *tp = netdev_priv(dev);
-
- if(typhoon_reset(tp->ioaddr, WaitNoSleep) < 0) {
- netdev_warn(dev, "could not reset in tx timeout\n");
- goto truly_dead;
- }
-
- /* If we ever start using the Hi ring, it will need cleaning too */
- typhoon_clean_tx(tp, &tp->txLoRing, &tp->indexes->txLoCleared);
- typhoon_free_rx_rings(tp);
-
- if(typhoon_start_runtime(tp) < 0) {
- netdev_err(dev, "could not start runtime in tx timeout\n");
- goto truly_dead;
- }
-
- netif_wake_queue(dev);
- return;
-
-truly_dead:
- /* Reset the hardware, and turn off carrier to avoid more timeouts */
- typhoon_reset(tp->ioaddr, NoWait);
- netif_carrier_off(dev);
-}
-
-static int
-typhoon_open(struct net_device *dev)
-{
- struct typhoon *tp = netdev_priv(dev);
- int err;
-
- err = typhoon_request_firmware(tp);
- if (err)
- goto out;
-
- err = typhoon_wakeup(tp, WaitSleep);
- if(err < 0) {
- netdev_err(dev, "unable to wakeup device\n");
- goto out_sleep;
- }
-
- err = request_irq(dev->irq, typhoon_interrupt, IRQF_SHARED,
- dev->name, dev);
- if(err < 0)
- goto out_sleep;
-
- napi_enable(&tp->napi);
-
- err = typhoon_start_runtime(tp);
- if(err < 0) {
- napi_disable(&tp->napi);
- goto out_irq;
- }
-
- netif_start_queue(dev);
- return 0;
-
-out_irq:
- free_irq(dev->irq, dev);
-
-out_sleep:
- if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_HOST) < 0) {
- netdev_err(dev, "unable to reboot into sleep img\n");
- typhoon_reset(tp->ioaddr, NoWait);
- goto out;
- }
-
- if(typhoon_sleep(tp, PCI_D3hot, 0) < 0)
- netdev_err(dev, "unable to go back to sleep\n");
-
-out:
- return err;
-}
-
-static int
-typhoon_close(struct net_device *dev)
-{
- struct typhoon *tp = netdev_priv(dev);
-
- netif_stop_queue(dev);
- napi_disable(&tp->napi);
-
- if(typhoon_stop_runtime(tp, WaitSleep) < 0)
- netdev_err(dev, "unable to stop runtime\n");
-
- /* Make sure there is no irq handler running on a different CPU. */
- free_irq(dev->irq, dev);
-
- typhoon_free_rx_rings(tp);
- typhoon_init_rings(tp);
-
- if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_HOST) < 0)
- netdev_err(dev, "unable to boot sleep image\n");
-
- if(typhoon_sleep(tp, PCI_D3hot, 0) < 0)
- netdev_err(dev, "unable to put card to sleep\n");
-
- return 0;
-}
-
-#ifdef CONFIG_PM
-static int
-typhoon_resume(struct pci_dev *pdev)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
- struct typhoon *tp = netdev_priv(dev);
-
- /* If we're down, resume when we are upped.
- */
- if(!netif_running(dev))
- return 0;
-
- if(typhoon_wakeup(tp, WaitNoSleep) < 0) {
- netdev_err(dev, "critical: could not wake up in resume\n");
- goto reset;
- }
-
- if(typhoon_start_runtime(tp) < 0) {
- netdev_err(dev, "critical: could not start runtime in resume\n");
- goto reset;
- }
-
- netif_device_attach(dev);
- return 0;
-
-reset:
- typhoon_reset(tp->ioaddr, NoWait);
- return -EBUSY;
-}
-
-static int
-typhoon_suspend(struct pci_dev *pdev, pm_message_t state)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
- struct typhoon *tp = netdev_priv(dev);
- struct cmd_desc xp_cmd;
-
- /* If we're down, we're already suspended.
- */
- if(!netif_running(dev))
- return 0;
-
- /* TYPHOON_OFFLOAD_VLAN is always on now, so this doesn't work */
- if(tp->wol_events & TYPHOON_WAKE_MAGIC_PKT)
- netdev_warn(dev, "cannot do WAKE_MAGIC with VLAN offloading\n");
-
- netif_device_detach(dev);
-
- if(typhoon_stop_runtime(tp, WaitNoSleep) < 0) {
- netdev_err(dev, "unable to stop runtime\n");
- goto need_resume;
- }
-
- typhoon_free_rx_rings(tp);
- typhoon_init_rings(tp);
-
- if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_HOST) < 0) {
- netdev_err(dev, "unable to boot sleep image\n");
- goto need_resume;
- }
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_MAC_ADDRESS);
- xp_cmd.parm1 = cpu_to_le16(ntohs(*(__be16 *)&dev->dev_addr[0]));
- xp_cmd.parm2 = cpu_to_le32(ntohl(*(__be32 *)&dev->dev_addr[2]));
- if(typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL) < 0) {
- netdev_err(dev, "unable to set mac address in suspend\n");
- goto need_resume;
- }
-
- INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_SET_RX_FILTER);
- xp_cmd.parm1 = TYPHOON_RX_FILTER_DIRECTED | TYPHOON_RX_FILTER_BROADCAST;
- if(typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL) < 0) {
- netdev_err(dev, "unable to set rx filter in suspend\n");
- goto need_resume;
- }
-
- if(typhoon_sleep(tp, pci_choose_state(pdev, state), tp->wol_events) < 0) {
- netdev_err(dev, "unable to put card to sleep\n");
- goto need_resume;
- }
-
- return 0;
-
-need_resume:
- typhoon_resume(pdev);
- return -EBUSY;
-}
-#endif
-
-static int __devinit
-typhoon_test_mmio(struct pci_dev *pdev)
-{
- void __iomem *ioaddr = pci_iomap(pdev, 1, 128);
- int mode = 0;
- u32 val;
-
- if(!ioaddr)
- goto out;
-
- if(ioread32(ioaddr + TYPHOON_REG_STATUS) !=
- TYPHOON_STATUS_WAITING_FOR_HOST)
- goto out_unmap;
-
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_ENABLE);
-
- /* Ok, see if we can change our interrupt status register by
- * sending ourselves an interrupt. If so, then MMIO works.
- * The 50usec delay is arbitrary -- it could probably be smaller.
- */
- val = ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
- if((val & TYPHOON_INTR_SELF) == 0) {
- iowrite32(1, ioaddr + TYPHOON_REG_SELF_INTERRUPT);
- ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
- udelay(50);
- val = ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
- if(val & TYPHOON_INTR_SELF)
- mode = 1;
- }
-
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_MASK);
- iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);
- iowrite32(TYPHOON_INTR_NONE, ioaddr + TYPHOON_REG_INTR_ENABLE);
- ioread32(ioaddr + TYPHOON_REG_INTR_STATUS);
-
-out_unmap:
- pci_iounmap(pdev, ioaddr);
-
-out:
- if(!mode)
- pr_info("%s: falling back to port IO\n", pci_name(pdev));
- return mode;
-}
-
-static const struct net_device_ops typhoon_netdev_ops = {
- .ndo_open = typhoon_open,
- .ndo_stop = typhoon_close,
- .ndo_start_xmit = typhoon_start_tx,
- .ndo_set_multicast_list = typhoon_set_rx_mode,
- .ndo_tx_timeout = typhoon_tx_timeout,
- .ndo_get_stats = typhoon_get_stats,
- .ndo_validate_addr = eth_validate_addr,
- .ndo_set_mac_address = typhoon_set_mac_address,
- .ndo_change_mtu = eth_change_mtu,
-};
-
-static int __devinit
-typhoon_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
-{
- struct net_device *dev;
- struct typhoon *tp;
- int card_id = (int) ent->driver_data;
- void __iomem *ioaddr;
- void *shared;
- dma_addr_t shared_dma;
- struct cmd_desc xp_cmd;
- struct resp_desc xp_resp[3];
- int err = 0;
- const char *err_msg;
-
- dev = alloc_etherdev(sizeof(*tp));
- if(dev == NULL) {
- err_msg = "unable to alloc new net device";
- err = -ENOMEM;
- goto error_out;
- }
- SET_NETDEV_DEV(dev, &pdev->dev);
-
- err = pci_enable_device(pdev);
- if(err < 0) {
- err_msg = "unable to enable device";
- goto error_out_dev;
- }
-
- err = pci_set_mwi(pdev);
- if(err < 0) {
- err_msg = "unable to set MWI";
- goto error_out_disable;
- }
-
- err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
- if(err < 0) {
- err_msg = "No usable DMA configuration";
- goto error_out_mwi;
- }
-
- /* sanity checks on IO and MMIO BARs
- */
- if(!(pci_resource_flags(pdev, 0) & IORESOURCE_IO)) {
-		err_msg = "region #0 not a PCI IO resource, aborting";
- err = -ENODEV;
- goto error_out_mwi;
- }
- if(pci_resource_len(pdev, 0) < 128) {
- err_msg = "Invalid PCI IO region size, aborting";
- err = -ENODEV;
- goto error_out_mwi;
- }
- if(!(pci_resource_flags(pdev, 1) & IORESOURCE_MEM)) {
- err_msg = "region #1 not a PCI MMIO resource, aborting";
- err = -ENODEV;
- goto error_out_mwi;
- }
- if(pci_resource_len(pdev, 1) < 128) {
- err_msg = "Invalid PCI MMIO region size, aborting";
- err = -ENODEV;
- goto error_out_mwi;
- }
-
- err = pci_request_regions(pdev, KBUILD_MODNAME);
- if(err < 0) {
- err_msg = "could not request regions";
- goto error_out_mwi;
- }
-
- /* map our registers
- */
- if(use_mmio != 0 && use_mmio != 1)
- use_mmio = typhoon_test_mmio(pdev);
-
- ioaddr = pci_iomap(pdev, use_mmio, 128);
- if (!ioaddr) {
- err_msg = "cannot remap registers, aborting";
- err = -EIO;
- goto error_out_regions;
- }
-
- /* allocate pci dma space for rx and tx descriptor rings
- */
- shared = pci_alloc_consistent(pdev, sizeof(struct typhoon_shared),
- &shared_dma);
- if(!shared) {
- err_msg = "could not allocate DMA memory";
- err = -ENOMEM;
- goto error_out_remap;
- }
-
- dev->irq = pdev->irq;
- tp = netdev_priv(dev);
- tp->shared = shared;
- tp->shared_dma = shared_dma;
- tp->pdev = pdev;
- tp->tx_pdev = pdev;
- tp->ioaddr = ioaddr;
- tp->tx_ioaddr = ioaddr;
- tp->dev = dev;
-
- /* Init sequence:
- * 1) Reset the adapter to clear any bad juju
- * 2) Reload the sleep image
- * 3) Boot the sleep image
- * 4) Get the hardware address.
- * 5) Put the card to sleep.
- */
- if (typhoon_reset(ioaddr, WaitSleep) < 0) {
- err_msg = "could not reset 3XP";
- err = -EIO;
- goto error_out_dma;
- }
-
- /* Now that we've reset the 3XP and are sure it's not going to
- * write all over memory, enable bus mastering, and save our
- * state for resuming after a suspend.
- */
- pci_set_master(pdev);
- pci_save_state(pdev);
-
- typhoon_init_interface(tp);
- typhoon_init_rings(tp);
-
- if(typhoon_boot_3XP(tp, TYPHOON_STATUS_WAITING_FOR_HOST) < 0) {
- err_msg = "cannot boot 3XP sleep image";
- err = -EIO;
- goto error_out_reset;
- }
-
- INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_MAC_ADDRESS);
- if(typhoon_issue_command(tp, 1, &xp_cmd, 1, xp_resp) < 0) {
- err_msg = "cannot read MAC address";
- err = -EIO;
- goto error_out_reset;
- }
-
- *(__be16 *)&dev->dev_addr[0] = htons(le16_to_cpu(xp_resp[0].parm1));
- *(__be32 *)&dev->dev_addr[2] = htonl(le32_to_cpu(xp_resp[0].parm2));
-
- if(!is_valid_ether_addr(dev->dev_addr)) {
- err_msg = "Could not obtain valid ethernet address, aborting";
- goto error_out_reset;
- }
-
- /* Read the Sleep Image version last, so the response is valid
- * later when we print out the version reported.
- */
- INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_VERSIONS);
- if(typhoon_issue_command(tp, 1, &xp_cmd, 3, xp_resp) < 0) {
- err_msg = "Could not get Sleep Image version";
- goto error_out_reset;
- }
-
- tp->capabilities = typhoon_card_info[card_id].capabilities;
- tp->xcvr_select = TYPHOON_XCVR_AUTONEG;
-
- /* Typhoon 1.0 Sleep Images return one response descriptor to the
- * READ_VERSIONS command. Those versions are OK after waking up
- * from sleep without needing a reset. Typhoon 1.1+ Sleep Images
- * seem to need a little extra help to get started. Since we don't
- * know how to nudge it along, just kick it.
- */
- if(xp_resp[0].numDesc != 0)
- tp->capabilities |= TYPHOON_WAKEUP_NEEDS_RESET;
-
- if(typhoon_sleep(tp, PCI_D3hot, 0) < 0) {
- err_msg = "cannot put adapter to sleep";
- err = -EIO;
- goto error_out_reset;
- }
-
- /* The chip-specific entries in the device structure. */
- dev->netdev_ops = &typhoon_netdev_ops;
- netif_napi_add(dev, &tp->napi, typhoon_poll, 16);
- dev->watchdog_timeo = TX_TIMEOUT;
-
- SET_ETHTOOL_OPS(dev, &typhoon_ethtool_ops);
-
- /* We can handle scatter gather, up to 16 entries, and
- * we can do IP checksumming (only version 4, doh...)
- *
- * There's no way to turn off the RX VLAN offloading and stripping
- * on the current 3XP firmware -- it does not respect the offload
- * settings -- so we only allow the user to toggle the TX processing.
- */
- dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
- NETIF_F_HW_VLAN_TX;
- dev->features = dev->hw_features |
- NETIF_F_HW_VLAN_RX | NETIF_F_RXCSUM;
-
- if(register_netdev(dev) < 0) {
- err_msg = "unable to register netdev";
- goto error_out_reset;
- }
-
- pci_set_drvdata(pdev, dev);
-
- netdev_info(dev, "%s at %s 0x%llx, %pM\n",
- typhoon_card_info[card_id].name,
- use_mmio ? "MMIO" : "IO",
- (unsigned long long)pci_resource_start(pdev, use_mmio),
- dev->dev_addr);
-
- /* xp_resp still contains the response to the READ_VERSIONS command.
- * For debugging, let the user know what version he has.
- */
- if(xp_resp[0].numDesc == 0) {
- /* This is the Typhoon 1.0 type Sleep Image, last 16 bits
- * of version is Month/Day of build.
- */
- u16 monthday = le32_to_cpu(xp_resp[0].parm2) & 0xffff;
- netdev_info(dev, "Typhoon 1.0 Sleep Image built %02u/%02u/2000\n",
- monthday >> 8, monthday & 0xff);
- } else if(xp_resp[0].numDesc == 2) {
- /* This is the Typhoon 1.1+ type Sleep Image
- */
- u32 sleep_ver = le32_to_cpu(xp_resp[0].parm2);
- u8 *ver_string = (u8 *) &xp_resp[1];
- ver_string[25] = 0;
- netdev_info(dev, "Typhoon 1.1+ Sleep Image version %02x.%03x.%03x %s\n",
- sleep_ver >> 24, (sleep_ver >> 12) & 0xfff,
- sleep_ver & 0xfff, ver_string);
- } else {
- netdev_warn(dev, "Unknown Sleep Image version (%u:%04x)\n",
- xp_resp[0].numDesc, le32_to_cpu(xp_resp[0].parm2));
- }
-
- return 0;
-
-error_out_reset:
- typhoon_reset(ioaddr, NoWait);
-
-error_out_dma:
- pci_free_consistent(pdev, sizeof(struct typhoon_shared),
- shared, shared_dma);
-error_out_remap:
- pci_iounmap(pdev, ioaddr);
-error_out_regions:
- pci_release_regions(pdev);
-error_out_mwi:
- pci_clear_mwi(pdev);
-error_out_disable:
- pci_disable_device(pdev);
-error_out_dev:
- free_netdev(dev);
-error_out:
- pr_err("%s: %s\n", pci_name(pdev), err_msg);
- return err;
-}
-
-static void __devexit
-typhoon_remove_one(struct pci_dev *pdev)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
- struct typhoon *tp = netdev_priv(dev);
-
- unregister_netdev(dev);
- pci_set_power_state(pdev, PCI_D0);
- pci_restore_state(pdev);
- typhoon_reset(tp->ioaddr, NoWait);
- pci_iounmap(pdev, tp->ioaddr);
- pci_free_consistent(pdev, sizeof(struct typhoon_shared),
- tp->shared, tp->shared_dma);
- pci_release_regions(pdev);
- pci_clear_mwi(pdev);
- pci_disable_device(pdev);
- pci_set_drvdata(pdev, NULL);
- free_netdev(dev);
-}
-
-static struct pci_driver typhoon_driver = {
- .name = KBUILD_MODNAME,
- .id_table = typhoon_pci_tbl,
- .probe = typhoon_init_one,
- .remove = __devexit_p(typhoon_remove_one),
-#ifdef CONFIG_PM
- .suspend = typhoon_suspend,
- .resume = typhoon_resume,
-#endif
-};
-
-static int __init
-typhoon_init(void)
-{
- return pci_register_driver(&typhoon_driver);
-}
-
-static void __exit
-typhoon_cleanup(void)
-{
- if (typhoon_fw)
- release_firmware(typhoon_fw);
- pci_unregister_driver(&typhoon_driver);
-}
-
-module_init(typhoon_init);
-module_exit(typhoon_cleanup);
+++ /dev/null
-/* typhoon.h: chip info for the 3Com 3CR990 family of controllers */
-/*
- Written 2002-2003 by David Dillow <dave@thedillows.org>
-
- This software may be used and distributed according to the terms of
- the GNU General Public License (GPL), incorporated herein by reference.
- Drivers based on or derived from this code fall under the GPL and must
- retain the authorship, copyright and license notice. This file is not
- a complete program and may only be used when the entire operating
- system is licensed under the GPL.
-
- This software is available on a public web site. It may enable
- cryptographic capabilities of the 3Com hardware, and may be
- exported from the United States under License Exception "TSU"
- pursuant to 15 C.F.R. Section 740.13(e).
-
- This work was funded by the National Library of Medicine under
- the Department of Energy project number 0274DD06D1 and NLM project
- number Y1-LM-2015-01.
-*/
-
-/* All Typhoon ring positions are specified in bytes, and point to the
- * first "clean" entry in the ring -- i.e. the next entry we use for whatever
- * purpose.
- */
-
-/* The Typhoon basic ring
- * ringBase: where this ring lives (our virtual address)
- * lastWrite: the next entry we'll use
- */
-struct basic_ring {
- u8 *ringBase;
- u32 lastWrite;
-};
-
-/* The Typhoon transmit ring -- same as a basic ring, plus:
- * lastRead: where we're at in regard to cleaning up the ring
- * writeRegister: register to use for writing (different for Hi & Lo rings)
- */
-struct transmit_ring {
- u8 *ringBase;
- u32 lastWrite;
- u32 lastRead;
- int writeRegister;
-};
-
-/* The host<->Typhoon ring index structure
- * This indicates the current positions in the rings
- *
- * All values must be in little endian format for the 3XP
- *
- * rxHiCleared: entry we've cleared to in the Hi receive ring
- * rxLoCleared: entry we've cleared to in the Lo receive ring
- * rxBuffReady: next entry we'll put a free buffer in
- * respCleared: entry we've cleared to in the response ring
- *
- * txLoCleared: entry the NIC has cleared to in the Lo transmit ring
- * txHiCleared: entry the NIC has cleared to in the Hi transmit ring
- * rxLoReady: entry the NIC has filled to in the Lo receive ring
- * rxBuffCleared: entry the NIC has cleared in the free buffer ring
- * cmdCleared: entry the NIC has cleared in the command ring
- * respReady: entry the NIC has filled to in the response ring
- * rxHiReady: entry the NIC has filled to in the Hi receive ring
- */
-struct typhoon_indexes {
- /* The first four are written by the host, and read by the NIC */
- volatile __le32 rxHiCleared;
- volatile __le32 rxLoCleared;
- volatile __le32 rxBuffReady;
- volatile __le32 respCleared;
-
- /* The remaining are written by the NIC, and read by the host */
- volatile __le32 txLoCleared;
- volatile __le32 txHiCleared;
- volatile __le32 rxLoReady;
- volatile __le32 rxBuffCleared;
- volatile __le32 cmdCleared;
- volatile __le32 respReady;
- volatile __le32 rxHiReady;
-} __packed;
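/*
 * Illustrative aside, not part of the original typhoon.h: for the receive
 * and response rings, the NIC advances the *Ready offset past entries it
 * has filled and the host chases it with the matching *Cleared offset;
 * the ring is idle when the two are equal.  A standalone sketch of that
 * consumer loop, with entry_size and process_entry() as stand-ins for the
 * driver's real descriptor size and handler:
 */
#include <stdint.h>

static uint32_t drain_ring_sketch(uint32_t cleared, uint32_t ready,
				  uint32_t ring_bytes, uint32_t entry_size,
				  void (*process_entry)(uint32_t offset))
{
	while (cleared != ready) {
		process_entry(cleared);			/* handle one entry */
		cleared = (cleared + entry_size) % ring_bytes;
	}
	return cleared;		/* new offset to publish back to the NIC */
}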
-
-/* The host<->Typhoon interface
- * Our means of communicating where things are
- *
- * All values must be in little endian format for the 3XP
- *
- * ringIndex: 64 bit bus address of the index structure
- * txLoAddr: 64 bit bus address of the Lo transmit ring
- * txLoSize: size (in bytes) of the Lo transmit ring
- * txHi*: as above for the Hi priority transmit ring
- * rxLo*: as above for the Lo priority receive ring
- * rxBuff*: as above for the free buffer ring
- * cmd*: as above for the command ring
- * resp*: as above for the response ring
- * zeroAddr: 64 bit bus address of a zero word (for DMA)
- * rxHi*: as above for the Hi Priority receive ring
- *
- * While there is room for 64 bit addresses, current versions of the 3XP
- * only do 32 bit addresses, so the *Hi for each of the above will always
- * be zero.
- */
-struct typhoon_interface {
- __le32 ringIndex;
- __le32 ringIndexHi;
- __le32 txLoAddr;
- __le32 txLoAddrHi;
- __le32 txLoSize;
- __le32 txHiAddr;
- __le32 txHiAddrHi;
- __le32 txHiSize;
- __le32 rxLoAddr;
- __le32 rxLoAddrHi;
- __le32 rxLoSize;
- __le32 rxBuffAddr;
- __le32 rxBuffAddrHi;
- __le32 rxBuffSize;
- __le32 cmdAddr;
- __le32 cmdAddrHi;
- __le32 cmdSize;
- __le32 respAddr;
- __le32 respAddrHi;
- __le32 respSize;
- __le32 zeroAddr;
- __le32 zeroAddrHi;
- __le32 rxHiAddr;
- __le32 rxHiAddrHi;
- __le32 rxHiSize;
-} __packed;
-
-/* The Typhoon transmit/fragment descriptor
- *
- * A packet is described by a packet descriptor, followed by option descriptors,
- * if any, then one or more fragment descriptors.
- *
- * Packet descriptor:
- * flags: Descriptor type
- * len: zero, or length of this packet
- * addr*: 8 bytes of opaque data to the firmware -- for skb pointer
- * processFlags: Determine offload tasks to perform on this packet.
- *
- * Fragment descriptor:
- * flags: Descriptor type
- * len: length of this fragment
- * addr: low bytes of DMA address for this part of the packet
- * addrHi: hi bytes of DMA address for this part of the packet
- * processFlags: must be zero
- *
- * TYPHOON_DESC_VALID is not mentioned in their docs, but their Linux
- * driver uses it.
- */
-struct tx_desc {
- u8 flags;
-#define TYPHOON_TYPE_MASK 0x07
-#define TYPHOON_FRAG_DESC 0x00
-#define TYPHOON_TX_DESC 0x01
-#define TYPHOON_CMD_DESC 0x02
-#define TYPHOON_OPT_DESC 0x03
-#define TYPHOON_RX_DESC 0x04
-#define TYPHOON_RESP_DESC 0x05
-#define TYPHOON_OPT_TYPE_MASK 0xf0
-#define TYPHOON_OPT_IPSEC 0x00
-#define TYPHOON_OPT_TCP_SEG 0x10
-#define TYPHOON_CMD_RESPOND 0x40
-#define TYPHOON_RESP_ERROR 0x40
-#define TYPHOON_RX_ERROR 0x40
-#define TYPHOON_DESC_VALID 0x80
- u8 numDesc;
- __le16 len;
- union {
- struct {
- __le32 addr;
- __le32 addrHi;
- } frag;
- u64 tx_addr; /* opaque for hardware, for TX_DESC */
- };
- __le32 processFlags;
-#define TYPHOON_TX_PF_NO_CRC cpu_to_le32(0x00000001)
-#define TYPHOON_TX_PF_IP_CHKSUM cpu_to_le32(0x00000002)
-#define TYPHOON_TX_PF_TCP_CHKSUM cpu_to_le32(0x00000004)
-#define TYPHOON_TX_PF_TCP_SEGMENT cpu_to_le32(0x00000008)
-#define TYPHOON_TX_PF_INSERT_VLAN cpu_to_le32(0x00000010)
-#define TYPHOON_TX_PF_IPSEC cpu_to_le32(0x00000020)
-#define TYPHOON_TX_PF_VLAN_PRIORITY cpu_to_le32(0x00000040)
-#define TYPHOON_TX_PF_UDP_CHKSUM cpu_to_le32(0x00000080)
-#define TYPHOON_TX_PF_PAD_FRAME cpu_to_le32(0x00000100)
-#define TYPHOON_TX_PF_RESERVED cpu_to_le32(0x00000e00)
-#define TYPHOON_TX_PF_VLAN_MASK cpu_to_le32(0x0ffff000)
-#define TYPHOON_TX_PF_INTERNAL cpu_to_le32(0xf0000000)
-#define TYPHOON_TX_PF_VLAN_TAG_SHIFT 12
-} __packed;
-
-/* The TCP Segmentation offload option descriptor
- *
- * flags: descriptor type
- * numDesc: must be 1
- * mss_flags: bits 0-11 (little endian) are MSS, 12 is first TSO descriptor
- * 13 is last TSO descriptor, set both if only one TSO
- * respAddrLo: low bytes of address of the bytesTx field of this descriptor
- * bytesTx: total number of bytes in this TSO request
- * status: 0 on completion
- */
-struct tcpopt_desc {
- u8 flags;
- u8 numDesc;
- __le16 mss_flags;
-#define TYPHOON_TSO_FIRST cpu_to_le16(0x1000)
-#define TYPHOON_TSO_LAST cpu_to_le16(0x2000)
- __le32 respAddrLo;
- __le32 bytesTx;
- __le32 status;
-} __packed;
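
A hedged sketch of building this option descriptor; the helper and its
parameters are illustrative. In typhoon.c the equivalent work is done by
typhoon_tso_fill(), which also computes the bus address of the bytesTx field
to store in respAddrLo.

	/* Sketch only: describe one TSO request ahead of its fragment descriptors. */
	static void typhoon_fill_tso(struct tcpopt_desc *tcpd, u16 mss, u32 total_len,
				     u32 bytes_tx_bus_addr)
	{
		tcpd->flags = TYPHOON_OPT_DESC | TYPHOON_OPT_TCP_SEG;
		tcpd->numDesc = 1;
		tcpd->mss_flags = cpu_to_le16(mss) | TYPHOON_TSO_FIRST | TYPHOON_TSO_LAST;
		tcpd->respAddrLo = cpu_to_le32(bytes_tx_bus_addr);	/* where status is written back */
		tcpd->bytesTx = cpu_to_le32(total_len);
		tcpd->status = 0;
	}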
-
-/* The IPSEC Offload descriptor
- *
- * flags: descriptor type
- * numDesc: must be 1
- * ipsecFlags: bit 0: 0 -- generate IV, 1 -- use supplied IV
- * sa1, sa2: Security Association IDs for this packet
- * reserved: set to 0
- */
-struct ipsec_desc {
- u8 flags;
- u8 numDesc;
- __le16 ipsecFlags;
-#define TYPHOON_IPSEC_GEN_IV cpu_to_le16(0x0000)
-#define TYPHOON_IPSEC_USE_IV cpu_to_le16(0x0001)
- __le32 sa1;
- __le32 sa2;
- __le32 reserved;
-} __packed;
-
-/* The Typhoon receive descriptor (Updated by NIC)
- *
- * flags: Descriptor type, error indication
- * numDesc: Always zero
- * frameLen: the size of the packet received
- * addr: low 32 bits of the virtual addr passed in for this buffer
- * addrHi: high 32 bits of the virtual addr passed in for this buffer
- * rxStatus: Error if set in flags, otherwise result of offload processing
- * filterResults: results of filtering on packet, not used
- * ipsecResults: Results of IPSEC processing
- * vlanTag: the 802.1q TCI from the packet
- */
-struct rx_desc {
- u8 flags;
- u8 numDesc;
- __le16 frameLen;
- u32 addr; /* opaque, comes from virtAddr */
- u32 addrHi; /* opaque, comes from virtAddrHi */
- __le32 rxStatus;
-#define TYPHOON_RX_ERR_INTERNAL cpu_to_le32(0x00000000)
-#define TYPHOON_RX_ERR_FIFO_UNDERRUN cpu_to_le32(0x00000001)
-#define TYPHOON_RX_ERR_BAD_SSD cpu_to_le32(0x00000002)
-#define TYPHOON_RX_ERR_RUNT cpu_to_le32(0x00000003)
-#define TYPHOON_RX_ERR_CRC cpu_to_le32(0x00000004)
-#define TYPHOON_RX_ERR_OVERSIZE cpu_to_le32(0x00000005)
-#define TYPHOON_RX_ERR_ALIGN cpu_to_le32(0x00000006)
-#define TYPHOON_RX_ERR_DRIBBLE cpu_to_le32(0x00000007)
-#define TYPHOON_RX_PROTO_MASK cpu_to_le32(0x00000003)
-#define TYPHOON_RX_PROTO_UNKNOWN cpu_to_le32(0x00000000)
-#define TYPHOON_RX_PROTO_IP cpu_to_le32(0x00000001)
-#define TYPHOON_RX_PROTO_IPX cpu_to_le32(0x00000002)
-#define TYPHOON_RX_VLAN cpu_to_le32(0x00000004)
-#define TYPHOON_RX_IP_FRAG cpu_to_le32(0x00000008)
-#define TYPHOON_RX_IPSEC cpu_to_le32(0x00000010)
-#define TYPHOON_RX_IP_CHK_FAIL cpu_to_le32(0x00000020)
-#define TYPHOON_RX_TCP_CHK_FAIL cpu_to_le32(0x00000040)
-#define TYPHOON_RX_UDP_CHK_FAIL cpu_to_le32(0x00000080)
-#define TYPHOON_RX_IP_CHK_GOOD cpu_to_le32(0x00000100)
-#define TYPHOON_RX_TCP_CHK_GOOD cpu_to_le32(0x00000200)
-#define TYPHOON_RX_UDP_CHK_GOOD cpu_to_le32(0x00000400)
- __le16 filterResults;
-#define TYPHOON_RX_FILTER_MASK cpu_to_le16(0x7fff)
-#define TYPHOON_RX_FILTERED cpu_to_le16(0x8000)
- __le16 ipsecResults;
-#define TYPHOON_RX_OUTER_AH_GOOD cpu_to_le16(0x0001)
-#define TYPHOON_RX_OUTER_ESP_GOOD cpu_to_le16(0x0002)
-#define TYPHOON_RX_INNER_AH_GOOD cpu_to_le16(0x0004)
-#define TYPHOON_RX_INNER_ESP_GOOD cpu_to_le16(0x0008)
-#define TYPHOON_RX_OUTER_AH_FAIL cpu_to_le16(0x0010)
-#define TYPHOON_RX_OUTER_ESP_FAIL cpu_to_le16(0x0020)
-#define TYPHOON_RX_INNER_AH_FAIL cpu_to_le16(0x0040)
-#define TYPHOON_RX_INNER_ESP_FAIL cpu_to_le16(0x0080)
-#define TYPHOON_RX_UNKNOWN_SA cpu_to_le16(0x0100)
-#define TYPHOON_RX_ESP_FORMAT_ERR cpu_to_le16(0x0200)
- __be32 vlanTag;
-} __packed;
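
A minimal sketch of how a completed receive descriptor might be interpreted,
assuming the flags/rxStatus semantics described above; the helper name is
illustrative.

	static int typhoon_rx_verdict(const struct rx_desc *rx)
	{
		if (rx->flags & TYPHOON_RX_ERROR)
			return -EIO;	/* rxStatus now holds a TYPHOON_RX_ERR_* code */

		if (rx->rxStatus & (TYPHOON_RX_IP_CHK_GOOD |
				    TYPHOON_RX_TCP_CHK_GOOD |
				    TYPHOON_RX_UDP_CHK_GOOD))
			return 1;	/* hardware validated the checksum(s) */

		return 0;		/* good frame, no checksum verdict from the NIC */
	}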
-
-/* The Typhoon free buffer descriptor, used to give a buffer to the NIC
- *
- * physAddr: low 32 bits of the bus address of the buffer
- * physAddrHi: high 32 bits of the bus address of the buffer, always zero
- * virtAddr: low 32 bits of the skb address
- * virtAddrHi: high 32 bits of the skb address, always zero
- *
- * the virt* address is basically two 32 bit cookies, just passed back
- * from the NIC
- */
-struct rx_free {
- __le32 physAddr;
- __le32 physAddrHi;
- u32 virtAddr;
- u32 virtAddrHi;
-} __packed;
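
For example, posting a fresh receive buffer might look like the sketch below;
the cookie is whatever value the driver wants echoed back in rx_desc.addr
(the helper name is illustrative).

	static void typhoon_post_rx_buf(struct rx_free *r, dma_addr_t mapping, u32 cookie)
	{
		r->physAddr = cpu_to_le32(mapping);	/* low 32 bits of the bus address */
		r->physAddrHi = 0;			/* always zero, see above */
		r->virtAddr = cookie;			/* echoed back in rx_desc.addr */
		r->virtAddrHi = 0;			/* echoed back in rx_desc.addrHi */
	}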
-
-/* The Typhoon command descriptor, used for commands and responses
- *
- * flags: descriptor type
- * numDesc: number of descriptors following in this command/response,
- * i.e., zero for a one-descriptor command
- * cmd: the command
- * seqNo: sequence number (unused)
- * parm1: use varies by command
- * parm2: use varies by command
- * parm3: use varies by command
- */
-struct cmd_desc {
- u8 flags;
- u8 numDesc;
- __le16 cmd;
-#define TYPHOON_CMD_TX_ENABLE cpu_to_le16(0x0001)
-#define TYPHOON_CMD_TX_DISABLE cpu_to_le16(0x0002)
-#define TYPHOON_CMD_RX_ENABLE cpu_to_le16(0x0003)
-#define TYPHOON_CMD_RX_DISABLE cpu_to_le16(0x0004)
-#define TYPHOON_CMD_SET_RX_FILTER cpu_to_le16(0x0005)
-#define TYPHOON_CMD_READ_STATS cpu_to_le16(0x0007)
-#define TYPHOON_CMD_XCVR_SELECT cpu_to_le16(0x0013)
-#define TYPHOON_CMD_SET_MAX_PKT_SIZE cpu_to_le16(0x001a)
-#define TYPHOON_CMD_READ_MEDIA_STATUS cpu_to_le16(0x001b)
-#define TYPHOON_CMD_GOTO_SLEEP cpu_to_le16(0x0023)
-#define TYPHOON_CMD_SET_MULTICAST_HASH cpu_to_le16(0x0025)
-#define TYPHOON_CMD_SET_MAC_ADDRESS cpu_to_le16(0x0026)
-#define TYPHOON_CMD_READ_MAC_ADDRESS cpu_to_le16(0x0027)
-#define TYPHOON_CMD_VLAN_TYPE_WRITE cpu_to_le16(0x002b)
-#define TYPHOON_CMD_CREATE_SA cpu_to_le16(0x0034)
-#define TYPHOON_CMD_DELETE_SA cpu_to_le16(0x0035)
-#define TYPHOON_CMD_READ_VERSIONS cpu_to_le16(0x0043)
-#define TYPHOON_CMD_IRQ_COALESCE_CTRL cpu_to_le16(0x0045)
-#define TYPHOON_CMD_ENABLE_WAKE_EVENTS cpu_to_le16(0x0049)
-#define TYPHOON_CMD_SET_OFFLOAD_TASKS cpu_to_le16(0x004f)
-#define TYPHOON_CMD_HELLO_RESP cpu_to_le16(0x0057)
-#define TYPHOON_CMD_HALT cpu_to_le16(0x005d)
-#define TYPHOON_CMD_READ_IPSEC_INFO cpu_to_le16(0x005e)
-#define TYPHOON_CMD_GET_IPSEC_ENABLE cpu_to_le16(0x0067)
-#define TYPHOON_CMD_GET_CMD_LVL cpu_to_le16(0x0069)
- u16 seqNo;
- __le16 parm1;
- __le32 parm2;
- __le32 parm3;
-} __packed;
-
-/* The Typhoon response descriptor, see command descriptor for details
- */
-struct resp_desc {
- u8 flags;
- u8 numDesc;
- __le16 cmd;
- __le16 seqNo;
- __le16 parm1;
- __le32 parm2;
- __le32 parm3;
-} __packed;
-
-#define INIT_COMMAND_NO_RESPONSE(x, command) \
- do { struct cmd_desc *_ptr = (x); \
- memset(_ptr, 0, sizeof(struct cmd_desc)); \
- _ptr->flags = TYPHOON_CMD_DESC | TYPHOON_DESC_VALID; \
- _ptr->cmd = command; \
- } while(0)
-
-/* We set seqNo to 1 if we're expecting a response from this command */
-#define INIT_COMMAND_WITH_RESPONSE(x, command) \
- do { struct cmd_desc *_ptr = (x); \
- memset(_ptr, 0, sizeof(struct cmd_desc)); \
- _ptr->flags = TYPHOON_CMD_RESPOND | TYPHOON_CMD_DESC; \
- _ptr->flags |= TYPHOON_DESC_VALID; \
- _ptr->cmd = command; \
- _ptr->seqNo = 1; \
- } while(0)
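
Typical usage is a stack-allocated descriptor initialised by one of these
macros and then handed to the command ring; typhoon.c's typhoon_issue_command()
does the posting and, for WITH_RESPONSE commands, waits for the matching
resp_desc. A sketch:

	struct cmd_desc xp_cmd;

	/* one-shot command, no response expected */
	INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_TX_ENABLE);

	/* command whose result comes back in a resp_desc */
	INIT_COMMAND_WITH_RESPONSE(&xp_cmd, TYPHOON_CMD_READ_VERSIONS);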
-
-/* TYPHOON_CMD_SET_RX_FILTER filter bits (cmd.parm1)
- */
-#define TYPHOON_RX_FILTER_DIRECTED cpu_to_le16(0x0001)
-#define TYPHOON_RX_FILTER_ALL_MCAST cpu_to_le16(0x0002)
-#define TYPHOON_RX_FILTER_BROADCAST cpu_to_le16(0x0004)
-#define TYPHOON_RX_FILTER_PROMISCOUS cpu_to_le16(0x0008)
-#define TYPHOON_RX_FILTER_MCAST_HASH cpu_to_le16(0x0010)
-
-/* TYPHOON_CMD_READ_STATS response format
- */
-struct stats_resp {
- u8 flags;
- u8 numDesc;
- __le16 cmd;
- __le16 seqNo;
- __le16 unused;
- __le32 txPackets;
- __le64 txBytes;
- __le32 txDeferred;
- __le32 txLateCollisions;
- __le32 txCollisions;
- __le32 txCarrierLost;
- __le32 txMultipleCollisions;
- __le32 txExcessiveCollisions;
- __le32 txFifoUnderruns;
- __le32 txMulticastTxOverflows;
- __le32 txFiltered;
- __le32 rxPacketsGood;
- __le64 rxBytesGood;
- __le32 rxFifoOverruns;
- __le32 BadSSD;
- __le32 rxCrcErrors;
- __le32 rxOversized;
- __le32 rxBroadcast;
- __le32 rxMulticast;
- __le32 rxOverflow;
- __le32 rxFiltered;
- __le32 linkStatus;
-#define TYPHOON_LINK_STAT_MASK cpu_to_le32(0x00000001)
-#define TYPHOON_LINK_GOOD cpu_to_le32(0x00000001)
-#define TYPHOON_LINK_BAD cpu_to_le32(0x00000000)
-#define TYPHOON_LINK_SPEED_MASK cpu_to_le32(0x00000002)
-#define TYPHOON_LINK_100MBPS cpu_to_le32(0x00000002)
-#define TYPHOON_LINK_10MBPS cpu_to_le32(0x00000000)
-#define TYPHOON_LINK_DUPLEX_MASK cpu_to_le32(0x00000004)
-#define TYPHOON_LINK_FULL_DUPLEX cpu_to_le32(0x00000004)
-#define TYPHOON_LINK_HALF_DUPLEX cpu_to_le32(0x00000000)
- __le32 unused2;
- __le32 unused3;
-} __packed;
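
The link bits pack status, speed and duplex into one word; a hedged sketch of
decoding them (helper name illustrative):

	static void typhoon_report_link(const struct stats_resp *s)
	{
		bool up   = (s->linkStatus & TYPHOON_LINK_STAT_MASK) == TYPHOON_LINK_GOOD;
		bool fast = (s->linkStatus & TYPHOON_LINK_SPEED_MASK) == TYPHOON_LINK_100MBPS;
		bool full = (s->linkStatus & TYPHOON_LINK_DUPLEX_MASK) == TYPHOON_LINK_FULL_DUPLEX;

		pr_info("link %s, %s Mbps, %s duplex\n",
			up ? "up" : "down", fast ? "100" : "10", full ? "full" : "half");
	}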
-
-/* TYPHOON_CMD_XCVR_SELECT xcvr values (resp.parm1)
- */
-#define TYPHOON_XCVR_10HALF cpu_to_le16(0x0000)
-#define TYPHOON_XCVR_10FULL cpu_to_le16(0x0001)
-#define TYPHOON_XCVR_100HALF cpu_to_le16(0x0002)
-#define TYPHOON_XCVR_100FULL cpu_to_le16(0x0003)
-#define TYPHOON_XCVR_AUTONEG cpu_to_le16(0x0004)
-
-/* TYPHOON_CMD_READ_MEDIA_STATUS (resp.parm1)
- */
-#define TYPHOON_MEDIA_STAT_CRC_STRIP_DISABLE cpu_to_le16(0x0004)
-#define TYPHOON_MEDIA_STAT_COLLISION_DETECT cpu_to_le16(0x0010)
-#define TYPHOON_MEDIA_STAT_CARRIER_SENSE cpu_to_le16(0x0020)
-#define TYPHOON_MEDIA_STAT_POLARITY_REV cpu_to_le16(0x0400)
-#define TYPHOON_MEDIA_STAT_NO_LINK cpu_to_le16(0x0800)
-
-/* TYPHOON_CMD_SET_MULTICAST_HASH enable values (cmd.parm1)
- */
-#define TYPHOON_MCAST_HASH_DISABLE cpu_to_le16(0x0000)
-#define TYPHOON_MCAST_HASH_ENABLE cpu_to_le16(0x0001)
-#define TYPHOON_MCAST_HASH_SET cpu_to_le16(0x0002)
-
-/* TYPHOON_CMD_CREATE_SA descriptor and settings
- */
-struct sa_descriptor {
- u8 flags;
- u8 numDesc;
- u16 cmd;
- u16 seqNo;
- u16 mode;
-#define TYPHOON_SA_MODE_NULL cpu_to_le16(0x0000)
-#define TYPHOON_SA_MODE_AH cpu_to_le16(0x0001)
-#define TYPHOON_SA_MODE_ESP cpu_to_le16(0x0002)
- u8 hashFlags;
-#define TYPHOON_SA_HASH_ENABLE 0x01
-#define TYPHOON_SA_HASH_SHA1 0x02
-#define TYPHOON_SA_HASH_MD5 0x04
- u8 direction;
-#define TYPHOON_SA_DIR_RX 0x00
-#define TYPHOON_SA_DIR_TX 0x01
- u8 encryptionFlags;
-#define TYPHOON_SA_ENCRYPT_ENABLE 0x01
-#define TYPHOON_SA_ENCRYPT_DES 0x02
-#define TYPHOON_SA_ENCRYPT_3DES 0x00
-#define TYPHOON_SA_ENCRYPT_3DES_2KEY 0x00
-#define TYPHOON_SA_ENCRYPT_3DES_3KEY 0x04
-#define TYPHOON_SA_ENCRYPT_CBC 0x08
-#define TYPHOON_SA_ENCRYPT_ECB 0x00
- u8 specifyIndex;
-#define TYPHOON_SA_SPECIFY_INDEX 0x01
-#define TYPHOON_SA_GENERATE_INDEX 0x00
- u32 SPI;
- u32 destAddr;
- u32 destMask;
- u8 integKey[20];
- u8 confKey[24];
- u32 index;
- u32 unused;
- u32 unused2;
-} __packed;
-
-/* TYPHOON_CMD_SET_OFFLOAD_TASKS bits (cmd.parm2 (Tx) & cmd.parm3 (Rx))
- * This is all for IPv4.
- */
-#define TYPHOON_OFFLOAD_TCP_CHKSUM cpu_to_le32(0x00000002)
-#define TYPHOON_OFFLOAD_UDP_CHKSUM cpu_to_le32(0x00000004)
-#define TYPHOON_OFFLOAD_IP_CHKSUM cpu_to_le32(0x00000008)
-#define TYPHOON_OFFLOAD_IPSEC cpu_to_le32(0x00000010)
-#define TYPHOON_OFFLOAD_BCAST_THROTTLE cpu_to_le32(0x00000020)
-#define TYPHOON_OFFLOAD_DHCP_PREVENT cpu_to_le32(0x00000040)
-#define TYPHOON_OFFLOAD_VLAN cpu_to_le32(0x00000080)
-#define TYPHOON_OFFLOAD_FILTERING cpu_to_le32(0x00000100)
-#define TYPHOON_OFFLOAD_TCP_SEGMENT cpu_to_le32(0x00000200)
-
-/* TYPHOON_CMD_ENABLE_WAKE_EVENTS bits (cmd.parm1)
- */
-#define TYPHOON_WAKE_MAGIC_PKT cpu_to_le16(0x01)
-#define TYPHOON_WAKE_LINK_EVENT cpu_to_le16(0x02)
-#define TYPHOON_WAKE_ICMP_ECHO cpu_to_le16(0x04)
-#define TYPHOON_WAKE_ARP cpu_to_le16(0x08)
-
-/* These are used to load the firmware image on the NIC
- */
-struct typhoon_file_header {
- u8 tag[8];
- __le32 version;
- __le32 numSections;
- __le32 startAddr;
- __le32 hmacDigest[5];
-} __packed;
-
-struct typhoon_section_header {
- __le32 len;
- u16 checksum;
- u16 reserved;
- __le32 startAddr;
-} __packed;
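
The image is a typhoon_file_header followed by numSections pairs of
(typhoon_section_header, payload); a hedged sketch of walking it follows.
Bounds checks are omitted and the helper name is illustrative; the real loader
is typhoon_download_firmware() in typhoon.c.

	static void typhoon_walk_fw(const u8 *image, u32 image_len)
	{
		const struct typhoon_file_header *fhdr = (const void *)image;
		const u8 *p = image + sizeof(*fhdr);
		u32 i;

		for (i = 0; i < le32_to_cpu(fhdr->numSections); i++) {
			const struct typhoon_section_header *shdr = (const void *)p;
			u32 len = le32_to_cpu(shdr->len);

			/* shdr->startAddr is where this section lands on the 3XP;
			 * the payload follows immediately after the header.
			 * Real code must check len against image_len. */
			p += sizeof(*shdr) + len;
		}
	}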
-
-/* The Typhoon Register offsets
- */
-#define TYPHOON_REG_SOFT_RESET 0x00
-#define TYPHOON_REG_INTR_STATUS 0x04
-#define TYPHOON_REG_INTR_ENABLE 0x08
-#define TYPHOON_REG_INTR_MASK 0x0c
-#define TYPHOON_REG_SELF_INTERRUPT 0x10
-#define TYPHOON_REG_HOST2ARM7 0x14
-#define TYPHOON_REG_HOST2ARM6 0x18
-#define TYPHOON_REG_HOST2ARM5 0x1c
-#define TYPHOON_REG_HOST2ARM4 0x20
-#define TYPHOON_REG_HOST2ARM3 0x24
-#define TYPHOON_REG_HOST2ARM2 0x28
-#define TYPHOON_REG_HOST2ARM1 0x2c
-#define TYPHOON_REG_HOST2ARM0 0x30
-#define TYPHOON_REG_ARM2HOST3 0x34
-#define TYPHOON_REG_ARM2HOST2 0x38
-#define TYPHOON_REG_ARM2HOST1 0x3c
-#define TYPHOON_REG_ARM2HOST0 0x40
-
-#define TYPHOON_REG_BOOT_DATA_LO TYPHOON_REG_HOST2ARM5
-#define TYPHOON_REG_BOOT_DATA_HI TYPHOON_REG_HOST2ARM4
-#define TYPHOON_REG_BOOT_DEST_ADDR TYPHOON_REG_HOST2ARM3
-#define TYPHOON_REG_BOOT_CHECKSUM TYPHOON_REG_HOST2ARM2
-#define TYPHOON_REG_BOOT_LENGTH TYPHOON_REG_HOST2ARM1
-
-#define TYPHOON_REG_DOWNLOAD_BOOT_ADDR TYPHOON_REG_HOST2ARM1
-#define TYPHOON_REG_DOWNLOAD_HMAC_0 TYPHOON_REG_HOST2ARM2
-#define TYPHOON_REG_DOWNLOAD_HMAC_1 TYPHOON_REG_HOST2ARM3
-#define TYPHOON_REG_DOWNLOAD_HMAC_2 TYPHOON_REG_HOST2ARM4
-#define TYPHOON_REG_DOWNLOAD_HMAC_3 TYPHOON_REG_HOST2ARM5
-#define TYPHOON_REG_DOWNLOAD_HMAC_4 TYPHOON_REG_HOST2ARM6
-
-#define TYPHOON_REG_BOOT_RECORD_ADDR_HI TYPHOON_REG_HOST2ARM2
-#define TYPHOON_REG_BOOT_RECORD_ADDR_LO TYPHOON_REG_HOST2ARM1
-
-#define TYPHOON_REG_TX_LO_READY TYPHOON_REG_HOST2ARM3
-#define TYPHOON_REG_CMD_READY TYPHOON_REG_HOST2ARM2
-#define TYPHOON_REG_TX_HI_READY TYPHOON_REG_HOST2ARM1
-
-#define TYPHOON_REG_COMMAND TYPHOON_REG_HOST2ARM0
-#define TYPHOON_REG_HEARTBEAT TYPHOON_REG_ARM2HOST3
-#define TYPHOON_REG_STATUS TYPHOON_REG_ARM2HOST0
-
-/* 3XP Reset values (TYPHOON_REG_SOFT_RESET)
- */
-#define TYPHOON_RESET_ALL 0x7f
-#define TYPHOON_RESET_NONE 0x00
-
-/* 3XP irq bits (TYPHOON_REG_INTR{STATUS,ENABLE,MASK})
- *
- * Some of these came from OpenBSD, as the 3Com docs have it wrong
- * (INTR_SELF) or don't list it at all (INTR_*_ABORT)
- *
- * Enabling irqs on the Heartbeat reg (ArmToHost3) gets you an irq
- * about every 8ms, so don't do it.
- */
-#define TYPHOON_INTR_HOST_INT 0x00000001
-#define TYPHOON_INTR_ARM2HOST0 0x00000002
-#define TYPHOON_INTR_ARM2HOST1 0x00000004
-#define TYPHOON_INTR_ARM2HOST2 0x00000008
-#define TYPHOON_INTR_ARM2HOST3 0x00000010
-#define TYPHOON_INTR_DMA0 0x00000020
-#define TYPHOON_INTR_DMA1 0x00000040
-#define TYPHOON_INTR_DMA2 0x00000080
-#define TYPHOON_INTR_DMA3 0x00000100
-#define TYPHOON_INTR_MASTER_ABORT 0x00000200
-#define TYPHOON_INTR_TARGET_ABORT 0x00000400
-#define TYPHOON_INTR_SELF 0x00000800
-#define TYPHOON_INTR_RESERVED 0xfffff000
-
-#define TYPHOON_INTR_BOOTCMD TYPHOON_INTR_ARM2HOST0
-
-#define TYPHOON_INTR_ENABLE_ALL 0xffffffef
-#define TYPHOON_INTR_ALL 0xffffffff
-#define TYPHOON_INTR_NONE 0x00000000
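
A sketch of the enable sequence implied by these masks: acknowledge whatever
is pending, then unmask everything except the heartbeat source (ENABLE_ALL
already leaves the ArmToHost3 bit clear). Helper name illustrative.

	static void typhoon_irq_enable(void __iomem *ioaddr)
	{
		iowrite32(TYPHOON_INTR_ALL, ioaddr + TYPHOON_REG_INTR_STATUS);		/* ack */
		iowrite32(TYPHOON_INTR_ENABLE_ALL, ioaddr + TYPHOON_REG_INTR_ENABLE);
		iowrite32(TYPHOON_INTR_NONE, ioaddr + TYPHOON_REG_INTR_MASK);		/* unmask */
	}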
-
-/* The commands for the 3XP chip (TYPHOON_REG_COMMAND)
- */
-#define TYPHOON_BOOTCMD_BOOT 0x00
-#define TYPHOON_BOOTCMD_WAKEUP 0xfa
-#define TYPHOON_BOOTCMD_DNLD_COMPLETE 0xfb
-#define TYPHOON_BOOTCMD_SEG_AVAILABLE 0xfc
-#define TYPHOON_BOOTCMD_RUNTIME_IMAGE 0xfd
-#define TYPHOON_BOOTCMD_REG_BOOT_RECORD 0xff
-
-/* 3XP Status values (TYPHOON_REG_STATUS)
- */
-#define TYPHOON_STATUS_WAITING_FOR_BOOT 0x07
-#define TYPHOON_STATUS_SECOND_INIT 0x08
-#define TYPHOON_STATUS_RUNNING 0x09
-#define TYPHOON_STATUS_WAITING_FOR_HOST 0x0d
-#define TYPHOON_STATUS_WAITING_FOR_SEGMENT 0x10
-#define TYPHOON_STATUS_SLEEPING 0x11
-#define TYPHOON_STATUS_HALTED 0x14