remoteproc: Fall back to using parent memory pool if no dedicated available

In some cases, like with OMAP remoteproc, we do not create a dedicated
memory pool for the virtio device; instead, the same memory pool is used
for all shared memories. The current virtio memory pool handling forces
a split between these two cases, as a separate device is created for the
virtio device, causing memory to be allocated from a bad location when
the dedicated pool is not available. Fix this by falling back to the
parent device's memory pool when no dedicated pool is available.

Cc: stable@vger.kernel.org
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Acked-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Fixes: 086d087 ("remoteproc: create vdev subdevice with specific dma memory pool")
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Suman Anna <s-anna@ti.com>
Link: https://lore.kernel.org/r/20200420160600.10467-2-s-anna@ti.com
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Tero Kristo authored and Bjorn Andersson committed May 12, 2020
1 parent 529798b commit db9178a
Showing 1 changed file with 12 additions and 0 deletions.
12 changes: 12 additions & 0 deletions drivers/remoteproc/remoteproc_virtio.c
@@ -375,6 +375,18 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
 				goto out;
 			}
 		}
+	} else {
+		struct device_node *np = rproc->dev.parent->of_node;
+
+		/*
+		 * If we don't have dedicated buffer, just attempt to re-assign
+		 * the reserved memory from our parent. A default memory-region
+		 * at index 0 from the parent's memory-regions is assigned for
+		 * the rvdev dev to allocate from. Failure is non-critical and
+		 * the allocations will fall back to global pools, so don't
+		 * check return value either.
+		 */
+		of_reserved_mem_device_init_by_idx(dev, np, 0);
+	}
 
 	/* Allocate virtio device */
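For illustration only (not part of this commit): the fallback path calls
of_reserved_mem_device_init_by_idx() with index 0, so it reuses whatever
reserved-memory region the parent remoteproc node lists first in its
memory-region property. A hypothetical devicetree sketch of such a setup
(node names, addresses, and sizes are made up):

```
	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		dsp_pool: dsp-memory@98000000 {
			compatible = "shared-dma-pool";
			reg = <0x98000000 0x800000>;
			reusable;
		};
	};

	dsp: remoteproc@40800000 {
		/* index 0: with this patch, the vdev also allocates here
		 * when it has no dedicated pool of its own */
		memory-region = <&dsp_pool>;
	};
```

If the parent has no usable reserved-memory entry either, the call fails
harmlessly and allocations fall back to the global DMA pools, which is why
the patch does not check the return value.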
