[PKT_SCHED]: (G)RED: Introduce hard dropping
Introduces a new flag TC_RED_HARDDROP which specifies that, even if ECN
marking is enabled, packets should still be dropped once the average
queue length exceeds the maximum threshold.
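As a rough sketch (the helper below is invented for this illustration; only the ordering of the checks mirrors the patch), the decision the flag adds at the RED_HARD_MARK point can be read as follows:

#include <stdbool.h>

/* Simplified restatement of the RED_HARD_MARK branch in the diff below:
 * with TC_RED_HARDDROP set the packet is dropped even when an ECN CE
 * mark could have been applied. */
bool hard_mark_drops_packet(bool harddrop, bool ecn_enabled, bool ce_mark_ok)
{
	if (harddrop || !ecn_enabled || !ce_mark_ok)
		return true;	/* counted as forced_drop, then congestion_drop */
	return false;		/* packet is CE-marked and queued */
}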

This _may_ help to avoid global synchronisation during small bursts
from peers that advertise ECN capability but do not react to marking.
Use this option very carefully; it does more harm than good if
(qth_max - qth_min) does not cover at least two average burst
cycles.
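For illustration only, a userspace configuration could request both behaviours by setting the two flag bits in struct tc_red_qopt. Every numeric value below is a placeholder rather than a recommendation; in practice tc derives the thresholds from the link bandwidth and the expected burst size.

#include <linux/pkt_sched.h>
#include <string.h>

/* Hypothetical parameter block for a RED qdisc; all values are placeholders. */
void fill_red_opts(struct tc_red_qopt *opt)
{
	memset(opt, 0, sizeof(*opt));
	opt->limit     = 400000;	/* hard queue limit, bytes */
	opt->qth_min   = 30000;		/* min average-queue threshold, bytes */
	opt->qth_max   = 90000;		/* keep qth_max - qth_min wide enough for bursts */
	opt->Wlog      = 9;		/* averaging weight W = 2^-9 */
	opt->Plog      = 17;		/* drop-probability slope (placeholder) */
	opt->Scell_log = 20;		/* cell size for idle damping */
	opt->flags     = TC_RED_ECN | TC_RED_HARDDROP;	/* mark, but hard-drop above qth_max */
}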

The difference from the current behaviour, in which we would run into
the hard queue limit, is that thanks to RED's low-pass filter on the
queue length, short bursts are less likely to cause global
synchronisation.
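A minimal sketch of that low-pass filter in its textbook floating-point form (the kernel uses a fixed-point equivalent, and the numbers here are made up) shows why a short burst barely moves the average queue length:

#include <stdio.h>

int main(void)
{
	double avg = 0.0;
	double w = 1.0 / 512;	/* W = 2^-Wlog with Wlog = 9 */
	int i;

	/* A 50-packet burst while the instantaneous queue sits at 100 units. */
	for (i = 0; i < 50; i++)
		avg = (1.0 - w) * avg + w * 100.0;

	printf("average after burst: %.2f\n", avg);	/* roughly 9.3 */
	return 0;
}

The instantaneous queue hits 100, but the average only reaches about 9, so a sensibly chosen qth_max is never crossed and neither marking nor dropping kicks in.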

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Thomas Graf authored and Thomas Graf committed Nov 5, 2005
1 parent b38c7ee commit bdc450a
Showing 3 changed files with 16 additions and 2 deletions.
2 changes: 2 additions & 0 deletions include/linux/pkt_sched.h
@@ -93,6 +93,7 @@ struct tc_fifo_qopt
 /* PRIO section */
 
 #define TCQ_PRIO_BANDS 16
+#define TCQ_MIN_PRIO_BANDS 2
 
 struct tc_prio_qopt
 {
@@ -169,6 +170,7 @@ struct tc_red_qopt
 	unsigned char Scell_log; /* cell size for idle damping */
 	unsigned char flags;
 #define TC_RED_ECN 1
+#define TC_RED_HARDDROP 2
 };
 
 struct tc_red_xstats
8 changes: 7 additions & 1 deletion net/sched/sch_gred.c
@@ -146,6 +146,11 @@ static inline int gred_use_ecn(struct gred_sched *t)
 	return t->red_flags & TC_RED_ECN;
 }
 
+static inline int gred_use_harddrop(struct gred_sched *t)
+{
+	return t->red_flags & TC_RED_HARDDROP;
+}
+
 static int gred_enqueue(struct sk_buff *skb, struct Qdisc* sch)
 {
 	struct gred_sched_data *q=NULL;
@@ -214,7 +219,8 @@ static int gred_enqueue(struct sk_buff *skb, struct Qdisc* sch)
 
 	case RED_HARD_MARK:
 		sch->qstats.overlimits++;
-		if (!gred_use_ecn(t) || !INET_ECN_set_ce(skb)) {
+		if (gred_use_harddrop(t) || !gred_use_ecn(t) ||
+		    !INET_ECN_set_ce(skb)) {
 			q->stats.forced_drop++;
 			goto congestion_drop;
 		}
8 changes: 7 additions & 1 deletion net/sched/sch_red.c
@@ -51,6 +51,11 @@ static inline int red_use_ecn(struct red_sched_data *q)
 	return q->flags & TC_RED_ECN;
 }
 
+static inline int red_use_harddrop(struct red_sched_data *q)
+{
+	return q->flags & TC_RED_HARDDROP;
+}
+
 static int red_enqueue(struct sk_buff *skb, struct Qdisc* sch)
 {
 	struct red_sched_data *q = qdisc_priv(sch);
@@ -76,7 +81,8 @@ static int red_enqueue(struct sk_buff *skb, struct Qdisc* sch)
 
 	case RED_HARD_MARK:
 		sch->qstats.overlimits++;
-		if (!red_use_ecn(q) || !INET_ECN_set_ce(skb)) {
+		if (red_use_harddrop(q) || !red_use_ecn(q) ||
+		    !INET_ECN_set_ce(skb)) {
 			q->stats.forced_drop++;
 			goto congestion_drop;
 		}
