xen-netfront: use napi_complete() correctly to prevent Rx stalling
After d75b1ad ("net: less interrupt
masking in NAPI") the napi instance is removed from the per-cpu list
prior to calling n->poll(), and is only requeued if all of the
budget was used.  This inadvertently broke netfront because netfront
does not use NAPI correctly.
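
For context, the relevant core behaviour can be sketched as below.  This
is a paraphrase of the post-d75b1ad dispatch logic, not the literal
net_rx_action() source; n, weight and repoll stand in for the core's own
variables.

	/* Paraphrased sketch of the core dispatch after d75b1ad. */
	list_del_init(&n->poll_list);
	work = n->poll(n, weight);
	if (work == weight)
		/* Budget exhausted: poll again on the next pass. */
		list_add_tail(&n->poll_list, &repoll);
	/* work < weight: the core assumes the driver has called
	 * napi_complete(); if it has not, the instance is neither
	 * scheduled nor completed and Rx processing stalls. */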

If netfront had not used all of its budget it would do a final check
for any Rx responses and avoid calling napi_complete() if there were
more responses.  It would still return under budget so it would never
be rescheduled.  The final check would also not re-enable the Rx
interrupt.
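
The pre-patch end of xennet_poll() (the lines removed in the diff below,
slightly condensed) looked like this; the comment marks the stall:

	if (work_done < budget) {
		int more_to_do = 0;

		local_irq_save(flags);

		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
		if (!more_to_do)
			__napi_complete(napi);
		/* If more_to_do is set, napi is left uncompleted, yet
		 * work_done < budget is returned, so the core never
		 * requeues it: Rx stalls. */

		local_irq_restore(flags);
	}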

Additionally, xennet_poll() would call napi_complete() /after/
enabling the interrupt.  This resulted in a race between the
napi_complete() and the napi_schedule() in the interrupt handler.  The
use of local_irq_save/restore() avoided this race only if the handler
was running on the same CPU, but not if it was running on a different
CPU.
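
The interrupt-handler side of that race is essentially the following
(simplified; the real xennet_rx_interrupt() also checks carrier state
and unconsumed responses before scheduling):

	static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
	{
		struct netfront_queue *queue = dev_id;

		/* May run on any CPU as soon as event delivery is
		 * re-enabled by RING_FINAL_CHECK_FOR_RESPONSES().  If the
		 * napi instance is still marked scheduled, this schedule
		 * is a no-op, and the subsequent __napi_complete() then
		 * clears the state without rescheduling. */
		napi_schedule(&queue->napi);

		return IRQ_HANDLED;
	}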

Fix both of these by always calling napi_complete() if the budget was
not all used, and then calling napi_schedule() if the final check
says there's more work.
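
The resulting end of xennet_poll() (the lines added in the diff below)
is then simply:

	if (work_done < budget) {
		int more_to_do = 0;

		napi_complete(napi);

		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
		if (more_to_do)
			napi_schedule(napi);
	}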

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Vrabel authored and David S. Miller committed Dec 16, 2014
1 parent f1fb521 commit 6a6dc08
11 changes: 3 additions & 8 deletions drivers/net/xen-netfront.c
@@ -977,7 +977,6 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	struct sk_buff_head rxq;
 	struct sk_buff_head errq;
 	struct sk_buff_head tmpq;
-	unsigned long flags;
 	int err;
 
 	spin_lock(&queue->rx_lock);
@@ -1050,15 +1049,11 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	if (work_done < budget) {
 		int more_to_do = 0;
 
-		napi_gro_flush(napi, false);
-
-		local_irq_save(flags);
+		napi_complete(napi);
 
 		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
-		if (!more_to_do)
-			__napi_complete(napi);
-
-		local_irq_restore(flags);
+		if (more_to_do)
+			napi_schedule(napi);
 	}
 
 	spin_unlock(&queue->rx_lock);
