iommu/vt-d: Avoid superfluous IOTLB tracking in lazy mode
The Intel IOMMU driver implements the IOTLB flush queue with domain-selective
or PASID-selective invalidations. In this case there is no need to track the
IOVA page range and sync IOTLBs, which may cause a significant performance
hit.

This patch adds a check to skip IOVA page gathering and IOTLB sync on the
lazy (flush-queue) path.
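
For context, the lazy path is detected through the gather->queued flag, which
the DMA mapping layer sets when a domain defers its IOTLB flushes; the
iommu_iotlb_gather_queued() helper used in the diff below essentially just
reports that flag. The following is a minimal standalone sketch of the idea,
for illustration only: struct gather, gather_queued(), gather_add_page() and
model_unmap() are simplified stand-ins, not the kernel implementation.

/*
 * Standalone model of the lazy-path check (illustration only, not kernel
 * code). The stand-ins mirror iommu_iotlb_gather_queued() and
 * iommu_iotlb_gather_add_page() in a simplified form.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct gather {
	unsigned long start, end;	/* IOVA range accumulated for flushing */
	bool queued;			/* true when the domain uses a flush queue */
};

static bool gather_queued(const struct gather *g)
{
	return g && g->queued;
}

static void gather_add_page(struct gather *g, unsigned long iova, size_t size)
{
	/* grow the range that would later be flushed and synced */
	if (iova < g->start)
		g->start = iova;
	if (iova + size > g->end)
		g->end = iova + size;
}

static size_t model_unmap(struct gather *g, unsigned long iova, size_t size)
{
	/* the fix: skip range tracking entirely when flushes are deferred */
	if (!gather_queued(g))
		gather_add_page(g, iova, size);
	return size;
}

int main(void)
{
	struct gather strict = { .start = ~0UL, .end = 0, .queued = false };
	struct gather lazy   = { .start = ~0UL, .end = 0, .queued = true };

	model_unmap(&strict, 0x1000, 0x1000);	/* range recorded for sync */
	model_unmap(&lazy,   0x1000, 0x1000);	/* nothing recorded */

	printf("strict: [%#lx, %#lx)  lazy: [%#lx, %#lx)\n",
	       strict.start, strict.end, lazy.start, lazy.end);
	return 0;
}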

On a Sapphire Rapids system with a 100Gb NIC, iperf send throughput improves
as follows:

w/o this fix: ~48 Gbits/s; with this fix: ~54 Gbits/s

Cc: <stable@vger.kernel.org>
Fixes: 2a2b8ea ("iommu: Handle freelists when using deferred flushing in iommu drivers")
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Link: https://lore.kernel.org/r/20230209175330.1783556-1-jacob.jun.pan@linux.intel.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Jacob Pan authored and Joerg Roedel committed Feb 16, 2023
1 parent 60b1daa commit 16a75bb
Showing 1 changed file with 6 additions and 1 deletion.
drivers/iommu/intel/iommu.c

@@ -4348,7 +4348,12 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 	if (dmar_domain->max_addr == iova + size)
 		dmar_domain->max_addr = iova;
 
-	iommu_iotlb_gather_add_page(domain, gather, iova, size);
+	/*
+	 * We do not use page-selective IOTLB invalidation in flush queue,
+	 * so there is no need to track page and sync iotlb.
+	 */
+	if (!iommu_iotlb_gather_queued(gather))
+		iommu_iotlb_gather_add_page(domain, gather, iova, size);
 
 	return size;
 }
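
A usage note (general kernel behavior, not stated in this commit): the skip
only applies to deferred-flush domains. With strict invalidation, for example
when booting with the kernel parameter shown below, gather->queued remains
false, so per-page tracking and IOTLB sync still happen on every unmap.

	iommu.strict=1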