net_sched: sfb: optimize enqueue on full queue
In case the SFB queue is full (hard limit reached), there is no point in
spending time computing the hash and the maximum qlen/p_mark.

Instead, we simply drop the packet early.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet authored and David S. Miller committed Aug 26, 2011
1 parent 18cf124 commit 363437f
Showing 1 changed file with 8 additions and 5 deletions.
net/sched/sch_sfb.c: 8 additions & 5 deletions
@@ -287,6 +287,12 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	u32 r, slot, salt, sfbhash;
 	int ret = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
 
+	if (unlikely(sch->q.qlen >= q->limit)) {
+		sch->qstats.overlimits++;
+		q->stats.queuedrop++;
+		goto drop;
+	}
+
 	if (q->rehash_interval > 0) {
 		unsigned long limit = q->rehash_time + q->rehash_interval;
 
@@ -332,12 +338,9 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	slot ^= 1;
 	sfb_skb_cb(skb)->hashes[slot] = 0;
 
-	if (unlikely(minqlen >= q->max || sch->q.qlen >= q->limit)) {
+	if (unlikely(minqlen >= q->max)) {
 		sch->qstats.overlimits++;
-		if (minqlen >= q->max)
-			q->stats.bucketdrop++;
-		else
-			q->stats.queuedrop++;
+		q->stats.bucketdrop++;
 		goto drop;
 	}
