Merge branch 'ibmvnic-Bug-fixes-for-queue-descriptor-processing'
Thomas Falcon says:

====================
ibmvnic: Bug fixes for queue descriptor processing

This series resolves a few issues in the ibmvnic driver's
RX buffer and TX completion processing. The first patch
adds memory barriers to order queue descriptor reads. The
second patch fixes a memory leak that could occur when the
device returns a TX completion with an error code in the
descriptor; in that case the associated socket buffer and
other related data structures might not be freed or updated
properly.

v3: Correct length of Fixes tags, requested by Jakub Kicinski

v2: Provide more detailed comments explaining specifically what
    reads are being ordered, suggested by Michael Ellerman
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller committed Dec 1, 2020
2 parents 237f977 + ba246c1 commit de7b3f8
Showing 1 changed file with 19 additions and 3 deletions.
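
For readers unfamiliar with dma_rmb(), the fragment below sketches the ordering problem the first patch addresses. It is a minimal kernel-style illustration under assumed names, not ibmvnic code: struct demo_desc, its valid flag, and demo_read_desc() are hypothetical stand-ins for a descriptor that a device writes into coherent DMA memory.

#include <linux/types.h>
#include <linux/compiler.h>	/* READ_ONCE() */
#include <asm/barrier.h>	/* dma_rmb() */
#include <asm/byteorder.h>	/* be64_to_cpu() */

/* Hypothetical descriptor in coherent DMA memory: the device fills
 * @data first and sets @valid last.
 */
struct demo_desc {
	u8	valid;
	__be64	data;
};

static u64 demo_read_desc(struct demo_desc *desc)
{
	/* Peek at the flag to see whether a descriptor is pending,
	 * much as pending_scrq() does for an ibmvnic sub-CRQ slot.
	 */
	if (!READ_ONCE(desc->valid))
		return 0;

	/* Without a barrier the CPU may load @data before @valid, so
	 * the payload read below could observe stale bytes even though
	 * the flag read said "valid". dma_rmb() orders the two reads.
	 */
	dma_rmb();

	return be64_to_cpu(desc->data);
}

The dma_rmb() calls added in the diff below play the same role: pending_scrq() peeks at the descriptor to see that one is awaiting processing, and the barrier ensures the subsequent reads of that descriptor's contents are not satisfied with stale data.
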
drivers/net/ethernet/ibm/ibmvnic.c: 19 additions & 3 deletions
@@ -2404,6 +2404,12 @@ static int ibmvnic_poll(struct napi_struct *napi, int budget)
 
 		if (!pending_scrq(adapter, adapter->rx_scrq[scrq_num]))
 			break;
+		/* The queue entry at the current index is peeked at above
+		 * to determine that there is a valid descriptor awaiting
+		 * processing. We want to be sure that the current slot
+		 * holds a valid descriptor before reading its contents.
+		 */
+		dma_rmb();
 		next = ibmvnic_next_scrq(adapter, adapter->rx_scrq[scrq_num]);
 		rx_buff =
 			(struct ibmvnic_rx_buff *)be64_to_cpu(next->
@@ -3113,13 +3119,18 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
 		unsigned int pool = scrq->pool_index;
 		int num_entries = 0;
 
+		/* The queue entry at the current index is peeked at above
+		 * to determine that there is a valid descriptor awaiting
+		 * processing. We want to be sure that the current slot
+		 * holds a valid descriptor before reading its contents.
+		 */
+		dma_rmb();
+
 		next = ibmvnic_next_scrq(adapter, scrq);
 		for (i = 0; i < next->tx_comp.num_comps; i++) {
-			if (next->tx_comp.rcs[i]) {
+			if (next->tx_comp.rcs[i])
 				dev_err(dev, "tx error %x\n",
 					next->tx_comp.rcs[i]);
-				continue;
-			}
 			index = be32_to_cpu(next->tx_comp.correlators[i]);
 			if (index & IBMVNIC_TSO_POOL_MASK) {
 				tx_pool = &adapter->tso_pool[pool];
@@ -3513,6 +3524,11 @@ static union sub_crq *ibmvnic_next_scrq(struct ibmvnic_adapter *adapter,
 	}
 	spin_unlock_irqrestore(&scrq->lock, flags);
 
+	/* Ensure that the entire buffer descriptor has been
+	 * loaded before reading its contents
+	 */
+	dma_rmb();
+
 	return entry;
 }
 
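To make the leak concrete, here is a simplified sketch of the TX completion loop after the fix; the ring and completion types are hypothetical, not the ibmvnic structures. Before the fix, an error return code made ibmvnic_complete_tx() skip the rest of the loop body with continue, so the completed skb and its slot were never released; the fixed code logs the error and still falls through to the cleanup, as this sketch does.

#include <linux/types.h>
#include <linux/skbuff.h>
#include <linux/printk.h>

/* Hypothetical TX ring that remembers the skb for each slot. */
struct demo_tx_ring {
	struct sk_buff *skbs[256];
};

struct demo_tx_comp {
	u32 index;	/* slot of the completed frame */
	u8  rc;		/* non-zero on error */
};

static void demo_complete_tx(struct demo_tx_ring *ring,
			     const struct demo_tx_comp *comps, int ncomps)
{
	struct sk_buff *skb;
	int i;

	for (i = 0; i < ncomps; i++) {
		if (comps[i].rc)
			pr_err("tx error %x\n", comps[i].rc);
		/* An error is only worth a log message; the buffer must
		 * still be freed. Skipping this cleanup on error (the old
		 * "continue") is what leaked skbs.
		 */
		skb = ring->skbs[comps[i].index];
		if (skb) {
			dev_kfree_skb_any(skb);
			ring->skbs[comps[i].index] = NULL;
		}
	}
}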
