dm cache policy mq: simplify ability to promote sequential IO to the cache

Before, if the user wanted sequential IO to be promoted to the cache
they'd have to set sequential_threshold to some nebulous large value.

Now, the user may easily disable sequential IO detection (and sequential
IO's implicit bypass of the cache) by setting sequential_threshold to 0.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>

Mike Snitzer committed Nov 10, 2014
1 parent b155aa0 commit f1afb36
Showing 2 changed files with 15 additions and 8 deletions.
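
In short, the user-visible contract after this commit, as a minimal C
sketch (the struct and function names below are made up for
illustration; they are not the driver's):

/*
 * Illustrative sketch only -- not code from the driver.  Summary of the
 * tunable semantics after this commit: a sequential_threshold of 0
 * disables the sequential I/O bypass of the cache, while a
 * random_threshold of 0 does not disable random stream detection.
 */
#include <stdbool.h>

struct mq_thresholds {
	unsigned sequential;	/* default 512 contiguous I/Os */
	unsigned random;	/* default 4 intervening non-contiguous I/Os */
};

static bool sequential_io_bypasses_cache(const struct mq_thresholds *t,
					 bool stream_is_sequential)
{
	/* 0 now means "never implicitly bypass the cache". */
	return t->sequential != 0 && stream_is_sequential;
}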
Documentation/device-mapper/cache-policies.txt (11 additions & 5 deletions)

@@ -47,16 +47,22 @@ Message and constructor argument pairs are:
 	'discard_promote_adjustment <value>'
 
 The sequential threshold indicates the number of contiguous I/Os
-required before a stream is treated as sequential. The random threshold
+required before a stream is treated as sequential. Once a stream is
+considered sequential it will bypass the cache. The random threshold
 is the number of intervening non-contiguous I/Os that must be seen
 before the stream is treated as random again.
 
 The sequential and random thresholds default to 512 and 4 respectively.
 
-Large, sequential ios are probably better left on the origin device
-since spindles tend to have good bandwidth. The io_tracker counts
-contiguous I/Os to try to spot when the io is in one of these sequential
-modes.
+Large, sequential I/Os are probably better left on the origin device
+since spindles tend to have good sequential I/O bandwidth. The
+io_tracker counts contiguous I/Os to try to spot when the I/O is in one
+of these sequential modes. But there are use-cases for wanting to
+promote sequential blocks to the cache (e.g. fast application startup).
+If sequential threshold is set to 0 the sequential I/O detection is
+disabled and sequential I/O will no longer implicitly bypass the cache.
+Setting the random threshold to 0 does _not_ disable the random I/O
+stream detection.
 
 Internally the mq policy determines a promotion threshold. If the hit
 count of a block not in the cache goes above this threshold it gets
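
The thresholds described in the hunk above drive a simple stream
classifier.  Below is a reduced sketch in the spirit of the io_tracker
the text refers to; it is not the driver's actual code, only an
illustration of how two counters and two thresholds can flip a stream
between random and sequential classification:

enum io_pattern { PATTERN_SEQUENTIAL, PATTERN_RANDOM };

struct io_tracker_sketch {
	enum io_pattern pattern;	/* current classification of the stream */
	unsigned nr_seq_samples;	/* recent contiguous I/Os */
	unsigned nr_rand_samples;	/* recent non-contiguous I/Os */
	unsigned thresholds[2];		/* e.g. 512 for sequential, 4 for random */
};

/* Switch classification once enough samples of the other kind accumulate. */
static void check_for_pattern_switch(struct io_tracker_sketch *t)
{
	if (t->pattern == PATTERN_SEQUENTIAL) {
		if (t->nr_rand_samples >= t->thresholds[PATTERN_RANDOM]) {
			t->pattern = PATTERN_RANDOM;
			t->nr_seq_samples = t->nr_rand_samples = 0;
		}
	} else {
		if (t->nr_seq_samples >= t->thresholds[PATTERN_SEQUENTIAL]) {
			t->pattern = PATTERN_SEQUENTIAL;
			t->nr_seq_samples = t->nr_rand_samples = 0;
		}
	}
}

In a scheme like this a sequential threshold of 0 would classify a
stream as sequential almost immediately, which is consistent with the
code change below checking the threshold explicitly in map() rather
than relying on the classifier once the bypass has been disabled.
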
drivers/md/dm-cache-policy-mq.c (4 additions & 3 deletions)

@@ -865,7 +865,8 @@ static int map(struct mq_policy *mq, dm_oblock_t oblock,
 	if (e && in_cache(mq, e))
 		r = cache_entry_found(mq, e, result);
 
-	else if (iot_pattern(&mq->tracker) == PATTERN_SEQUENTIAL)
+	else if (mq->tracker.thresholds[PATTERN_SEQUENTIAL] &&
+		 iot_pattern(&mq->tracker) == PATTERN_SEQUENTIAL)
 		result->op = POLICY_MISS;
 
 	else if (e)
@@ -1290,15 +1291,15 @@ static struct dm_cache_policy *mq_create(dm_cblock_t cache_size,
 
 static struct dm_cache_policy_type mq_policy_type = {
 	.name = "mq",
-	.version = {1, 2, 0},
+	.version = {1, 3, 0},
 	.hint_size = 4,
 	.owner = THIS_MODULE,
 	.create = mq_create
 };
 
 static struct dm_cache_policy_type default_policy_type = {
 	.name = "default",
-	.version = {1, 2, 0},
+	.version = {1, 3, 0},
 	.hint_size = 4,
 	.owner = THIS_MODULE,
 	.create = mq_create,
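
To make the first hunk above easier to read, here is the new condition
from map() restated with explanatory comments; the code lines mirror
the hunk, the comments are added here:

	else if (mq->tracker.thresholds[PATTERN_SEQUENTIAL] &&	  /* sequential detection enabled (threshold != 0)? */
		 iot_pattern(&mq->tracker) == PATTERN_SEQUENTIAL) /* and the stream currently looks sequential? */
		result->op = POLICY_MISS;			  /* leave the I/O on the origin; do not promote */

The version bumps from {1, 2, 0} to {1, 3, 0} in the second hunk
accompany the changed threshold semantics.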
