DMA, CMA: support alignment constraint on CMA region
PPC KVM's CMA area management needs an alignment constraint on its CMA
region, so support one here to prepare for generalizing the CMA area
management functionality.

Additionally, add comments explaining why an alignment constraint is
needed on a CMA region.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Gleb Natapov <gleb@kernel.org>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim authored and Linus Torvalds committed Aug 7, 2014
1 parent 3162bbd commit a15bc0b
1 changed file: drivers/base/dma-contiguous.c (18 additions, 8 deletions)
@@ -32,6 +32,7 @@
 #include <linux/swap.h>
 #include <linux/mm_types.h>
 #include <linux/dma-contiguous.h>
+#include <linux/log2.h>
 
 struct cma {
 	unsigned long base_pfn;
@@ -215,17 +216,16 @@ core_initcall(cma_init_reserved_areas);
 
 static int __init __dma_contiguous_reserve_area(phys_addr_t size,
 				phys_addr_t base, phys_addr_t limit,
+				phys_addr_t alignment,
 				struct cma **res_cma, bool fixed)
 {
 	struct cma *cma = &cma_areas[cma_area_count];
-	phys_addr_t alignment;
 	int ret = 0;
 
-	pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
-		 (unsigned long)size, (unsigned long)base,
-		 (unsigned long)limit);
+	pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
+		 __func__, (unsigned long)size, (unsigned long)base,
+		 (unsigned long)limit, (unsigned long)alignment);
 
 	/* Sanity checks */
 	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
 		pr_err("Not enough slots for CMA reserved regions!\n");
 		return -ENOSPC;
@@ -234,8 +234,17 @@ static int __init __dma_contiguous_reserve_area(phys_addr_t size,
 	if (!size)
 		return -EINVAL;
 
-	/* Sanitise input arguments */
-	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	if (alignment && !is_power_of_2(alignment))
+		return -EINVAL;
+
+	/*
+	 * Sanitise input arguments.
+	 * Pages both ends in CMA area could be merged into adjacent unmovable
+	 * migratetype page by page allocator's buddy algorithm. In the case,
+	 * you couldn't get a contiguous memory, which is not what we want.
+	 */
+	alignment = max(alignment,
+		(phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order));
 	base = ALIGN(base, alignment);
 	size = ALIGN(size, alignment);
 	limit &= ~(alignment - 1);
@@ -299,7 +308,8 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 {
 	int ret;
 
-	ret = __dma_contiguous_reserve_area(size, base, limit, res_cma, fixed);
+	ret = __dma_contiguous_reserve_area(size, base, limit, 0,
+					res_cma, fixed);
 	if (ret)
 		return ret;
 
