WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
[Xen-devel] [PATCH 0/7] IOMMU, vtd and iotlb flush rework (v6)

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH 0/7] IOMMU, vtd and iotlb flush rework (v6)
From: Jean Guyader <jean.guyader@xxxxxxxxxxxxx>
Date: Thu, 10 Nov 2011 11:35:24 +0000
Cc: keir@xxxxxxx, allen.m.kay@xxxxxxxxx, tim@xxxxxxx, Jean Guyader <jean.guyader@xxxxxxxxxxxxx>, JBeulich@xxxxxxxx
Delivery-date: Thu, 10 Nov 2011 03:36:21 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
In a previous email I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.

Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was far too disruptive in terms of code changes.

This patch series implements Plan A.

http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html

Changes between v5 and v6:
        - Rework the patch queue to make it more readable.
        - Modify xatp in place in xenmem_add_to_physmap
        - Only check for preemption if we are not at the last iteration
        - Copy the xatp guest handle back to the guest only in case of continuation
        - Add continuation only when dealing with the new xenmem space
          (XENMAPSPACE_gmfn_range).
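The v6 continuation rework described above can be sketched as follows. This is an illustrative stand-alone model, not the hypervisor code: the function and field names (`xenmem_add_to_physmap_range`, the counter-based `hypercall_preempt_check`) are simplified stand-ins for the real `xenmem_add_to_physmap` and Xen's preemption check. The key points from the changelog are that xatp is modified in place and preemption is only checked when the current iteration is not the last one, so a fully completed range never returns a spurious continuation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the hypervisor's types and helpers. */
struct xen_add_to_physmap {
    uint16_t size;  /* frames remaining in the range */
    uint64_t idx;   /* source frame */
    uint64_t gpfn;  /* destination gpfn */
};

static int preempt_after;  /* test hook: "request preemption" after N checks */

static bool hypercall_preempt_check(void)
{
    return --preempt_after <= 0;
}

/*
 * Sketch of the reworked loop: xatp is advanced in place, and
 * preemption is checked only if this is NOT the last iteration.
 * Returns the number of frames left (0 = done, >0 = continuation,
 * in which case the caller copies xatp back to the guest).
 */
static int xenmem_add_to_physmap_range(struct xen_add_to_physmap *xatp)
{
    while (xatp->size > 0) {
        /* ... map one frame: xatp->idx -> xatp->gpfn ... */
        xatp->idx++;
        xatp->gpfn++;
        xatp->size--;

        if (xatp->size > 0 && hypercall_preempt_check())
            return xatp->size;  /* caller creates a continuation */
    }
    return 0;
}
```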

Changes between v4 and v5:
        - Fix hypercall continuation for add_to_physmap in compat mode.

Changes between v3 and v4:
        - Move the loop for gmfn_range from arch_memory_op to
          xenmem_add_to_physmap.
        - Add a comment to explain the purpose of iommu_dont_flush_iotlb.

Changes between v2 and v3:
        - Check for the presence of the iotlb_flush_all callback before calling it.
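The v3 fix amounts to a NULL check on the ops table before invoking the hook. The sketch below is self-contained and illustrative: the real `struct iommu_ops` lives in xen/include/xen/iommu.h, and `vtd_iotlb_flush_all` here is only a test stub.

```c
#include <assert.h>
#include <stddef.h>

struct domain;  /* opaque, as in the hypervisor */

/* Simplified stand-in for the real struct iommu_ops. */
struct iommu_ops {
    void (*iotlb_flush_all)(struct domain *d);  /* may be NULL */
};

static int flush_all_calls;

static void vtd_iotlb_flush_all(struct domain *d)
{
    (void)d;
    flush_all_calls++;
}

/*
 * Guard against IOMMU implementations that do not provide an
 * iotlb_flush_all hook before invoking it.
 */
static void iommu_flush_all(const struct iommu_ops *ops, struct domain *d)
{
    if (ops && ops->iotlb_flush_all)
        ops->iotlb_flush_all(d);
}
```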

Changes between v1 and v2:
        - Move .size in struct xen_add_to_physmap into the padding between
          .domid and .space.
        - Store iommu_dont_flush per CPU.
        - Change the code in hvmloader to relocate in batches of 64K; .size
          is now 16 bits.
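The layout trick in the first v2 bullet can be sketched as below. On the common ABIs, the 16-bit domid is followed by two bytes of implicit padding before the 32-bit .space field, so a 16-bit .size fits there without growing the structure or moving any existing field. Field names follow the real xen_add_to_physmap, but the exact types here are illustrative.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t domid_t;

/* Original layout: 2 bytes of implicit padding after .domid. */
struct xen_add_to_physmap_v1 {
    domid_t  domid;
    uint32_t space;
    uint64_t idx;
    uint64_t gpfn;
};

/* v2 layout: .size occupies the former padding, ABI unchanged. */
struct xen_add_to_physmap_v2 {
    domid_t  domid;
    uint16_t size;   /* fills the former padding; max 64K frames */
    uint32_t space;
    uint64_t idx;
    uint64_t gpfn;
};
```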


Jean Guyader (7):
  vtd: Refactor iotlb flush code
  iommu: Introduce iommu_flush and iommu_flush_all.
  add_to_physmap: Move the code for XENMEM_add_to_physmap
  mm: xenmem_add_to_physmap now takes a pointer on xatp
  mm: New XENMEM space, XENMAPSPACE_gmfn_range
  hvmloader: Change memory relocation loop when overlap with PCI hole
  Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary
    iotlb flush
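The last patch in the series is the one that addresses the slow-flush problem itself: while the flag is set, per-page IOTLB flushes are suppressed, and a single flush-all is issued once the batch completes. The sketch below models that idea in a self-contained way; `map_batch` and the counters are test scaffolding, not hypervisor functions, and a plain static variable stands in for the real per-CPU storage.

```c
#include <assert.h>
#include <stdbool.h>

static bool this_cpu_iommu_dont_flush_iotlb;  /* per-CPU in the real code */
static int iotlb_flushes;

static void iommu_iotlb_flush_page(void)
{
    if (this_cpu_iommu_dont_flush_iotlb)
        return;          /* deferred: the batch will flush once at the end */
    iotlb_flushes++;
}

static void iommu_iotlb_flush_all(void)
{
    iotlb_flushes++;
}

/* Map a range of n pages, flushing the IOTLB once instead of n times. */
static void map_batch(int n)
{
    this_cpu_iommu_dont_flush_iotlb = true;
    for (int i = 0; i < n; i++) {
        /* ... update the page tables for page i ... */
        iommu_iotlb_flush_page();  /* suppressed while the flag is set */
    }
    this_cpu_iommu_dont_flush_iotlb = false;
    iommu_iotlb_flush_all();       /* one flush for the whole batch */
}
```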

 tools/firmware/hvmloader/pci.c      |   20 ++-
 xen/arch/x86/mm.c                   |  232 ++++++++++++++++++++++-------------
 xen/arch/x86/x86_64/compat/mm.c     |   12 ++
 xen/drivers/passthrough/iommu.c     |   25 ++++
 xen/drivers/passthrough/vtd/iommu.c |  100 +++++++++-------
 xen/include/public/memory.h         |    4 +
 xen/include/xen/iommu.h             |   17 +++
 7 files changed, 278 insertions(+), 132 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel