qdisc: dequeue bulking also pickup GSO/TSO packets
The TSO and GSO segmented packets already benefit from bulking
on their own.

The TSO packets have always taken advantage of only updating
the tailptr once for a large packet.

The GSO segmented packets have recently started taking advantage
of the bulking xmit_more API, via merge commit 53fda7f ("Merge
branch 'xmit_list'"), specifically via commit 7f2e870 ("net:
Move main gso loop out of dev_hard_start_xmit() into helper."),
which allows qdisc requeue of the remaining list, and via commit
ce93718 ("net: Don't keep around original SKB when we
software segment GSO frames.").

This patch allows further bulking of TSO/GSO packets together
when dequeuing from the qdisc.

Testing:
Measuring HoL (Head-of-Line) blocking for TSO and GSO, with
netperf-wrapper. Bulking several TSO packets shows no performance
regression (requeues were in the area of 32 requeues/sec).

Bulking several GSOs shows either a small regression or a very small
improvement (requeues were in the area of 8000 requeues/sec).

Using ixgbe at 10Gbit/s with GSO bulking, we can measure some additional
latency. The base case, which is "normal" GSO bulking, sees varying
high-prio queue delay between 0.38ms and 0.47ms.  Bulking several GSOs
together results in a stable high-prio queue delay of 0.50ms.

Using igb at 100Mbit/s with GSO bulking shows an improvement.
The base case sees varying high-prio queue delay between 2.23ms and 2.35ms.

Signed-off-by: David S. Miller <davem@davemloft.net>
Jesper Dangaard Brouer authored and David S. Miller committed Oct 3, 2014
1 parent 5772e9a commit 808e7ac
12 changes: 3 additions & 9 deletions net/sched/sch_generic.c
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -63,10 +63,6 @@ static struct sk_buff *try_bulk_dequeue_skb(struct Qdisc *q,
 	struct sk_buff *skb, *tail_skb = head_skb;
 
 	while (bytelimit > 0) {
-		/* For now, don't bulk dequeue GSO (or GSO segmented) pkts */
-		if (tail_skb->next || skb_is_gso(tail_skb))
-			break;
-
 		skb = q->dequeue(q);
 		if (!skb)
 			break;
@@ -76,11 +72,9 @@ static struct sk_buff *try_bulk_dequeue_skb(struct Qdisc *q,
 		if (!skb)
 			break;
 
-		/* "skb" can be a skb list after validate call above
-		 * (GSO segmented), but it is okay to append it to
-		 * current tail_skb->next, because next round will exit
-		 * in-case "tail_skb->next" is a skb list.
-		 */
+		while (tail_skb->next) /* GSO list goto tail */
+			tail_skb = tail_skb->next;
+
 		tail_skb->next = skb;
 		tail_skb = skb;
 	}
