
[Xen-devel] [VTD][patch 0/6] HVM device assignment using vt-d

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [VTD][patch 0/6] HVM device assignment using vt-d
From: "Kay, Allen M" <allen.m.kay@xxxxxxxxx>
Date: Fri, 6 Apr 2007 11:21:29 -0700
Keir,

The following 6 patches enable PCI device assignment to an HVM domain
using vt-d.  We have done no-harm-done testing for PV and non-vt-d HVM
guests on 32-bit PAE, x86_64 and IPF.  See below for details of the
patches.

Allen

----------------------

1) patch description: (applies cleanly to cs #14753)

vtd1.patch:
    - vt-d specific code + header file changes
    - not much common code interaction

vtd2.patch:
    - PCI config virtualization in QEMU + control panel/libxc changes
    - passes the pci parameter to qemu, handles the PCI config changes,
      and makes the hypercalls described in (5)
    - not much common code interaction

vtd3.patch:
    - domctl hypercall changes
    - implements the hypercalls described in (5)
    - some common code interaction

vtd4.patch:
    - io port handling
    - some common code interaction

vtd5.patch:
    - interrupt handling
    - heavy common code interaction

vtd6.patch:
    - mmio handling
    - heavy common code interaction in multi.c and page_alloc.c

2) environment tested:

Assigned a PCIe E1000 add-on card to a 32-bit FC5 guest on 64-bit Xen.
An informal "scp" test shows 200+ Mbps - similar to native performance
on my system.

3) how to run

- Use the same syntax as the PV driver domain method to "hide" and
  assign a PCI device (a combined config sketch follows this list):
    - use pciback.hide=(02:00.0) to "hide" the device from dom0
    - use pci = [ '02:00.00' ] in /etc/xen/hvm.conf to assign the
      device to the HVM domain
    - set acpi and apic to 0 in hvm.conf, as the current patch only
      works with PIC
    - grub.conf: use "ioapic_ack=old" for /boot/xen.gz
      (io_apic.c contains code for avoiding the global interrupt
      problem)
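
For reference, a sketch of the settings above in one place (02:00.0 is
just the example device; the dom0 kernel image name will differ per
system):

# /etc/xen/hvm.conf (fragment)
pci  = [ '02:00.00' ]   # assign the hidden device to the HVM guest
acpi = 0                # current patch only works with PIC
apic = 0

# grub.conf (fragment)
kernel /boot/xen.gz ioapic_ack=old
module /boot/vmlinuz-2.6-xen pciback.hide=(02:00.0)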

4) description of hvm PCI device assignment design:

- pci config virtualization
  - Control panel and qemu are changed to pass assigned PCI devices to
    qemu.
  - A new file, ioemu/hw/dpci.c, reads the assigned device's PCI config
    space, constructs a new virtual device, and attaches it to the
    guest PCI bus.
  - PCI config read/write functions are similar to those of other
    virtual devices, except that the write function intercepts writes
    to the COMMAND register and performs the actual hardware writes
    (a sketch follows this list).
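
A rough sketch of that write path; "struct pt_dev" and
host_pci_write_config() are assumed names, not taken from the patch:

#include <stdint.h>
#include <string.h>

#define PCI_COMMAND 0x04                /* COMMAND register offset */

struct pt_dev {                         /* assumed device state */
    uint8_t  config[256];               /* virtual config space */
    uint32_t host_bdf;                  /* real bus/dev/fn */
};

/* assumed helper performing the real config-space write */
extern void host_pci_write_config(uint32_t bdf, uint32_t addr,
                                  uint32_t val, int len);

static void dpci_write_config(struct pt_dev *dev, uint32_t addr,
                              uint32_t val, int len)
{
    /* emulate the write, as for any other virtual device */
    memcpy(dev->config + addr, &val, len);

    /* writes to COMMAND are additionally forwarded to the hardware */
    if (addr == PCI_COMMAND)
        host_pci_write_config(dev->host_bdf, addr, val, len);
}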

- interrupt virtualization
  - Currently only works in PIC mode (acpi and apic set to 0, per
    section (3) above)
  - dpci.c makes a hypercall to tell Xen the assigned device's
    device/intx binding on the virtual PCI bus
  - In do_IRQ_guest(), when Xen determines that an interrupt belongs to
    a device owned by an HVM domain, it injects the corresponding guest
    IRQ into that domain (a sketch follows this list)
  - Reverted back to ioapic_ack=old to allow IRQ sharing amongst
    guests.
  - Implemented a new method in io_apic.c to avoid the global interrupt
    issue.
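
A sketch of that dispatch step; hvm_do_IRQ_dpci() is the entry point
listed in section (6), while the irq-to-domain lookup is an assumed
name:

struct domain;
extern int hvm_do_IRQ_dpci(struct domain *d, unsigned int irq);
extern struct domain *irq_to_assigned_domain(unsigned int irq); /* assumed */

void do_IRQ_guest(unsigned int irq)
{
    /* does this machine IRQ belong to a device owned by an HVM domain? */
    struct domain *d = irq_to_assigned_domain(irq);

    if ( d != NULL )
        /* inject the guest IRQ bound earlier via xc_irq_mapping() */
        hvm_do_IRQ_dpci(d, irq);
}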

- mmio
  - When the guest BIOS (i.e. hvmloader) or OS changes a PCI BAR, the
    PCI config write function in qemu makes a hypercall to instruct
    Xen to construct the p2m mapping (a sketch follows this list).
  - The shadow page table fault handler has been modified to allow
    memory above max_pages to be mapped.
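
A sketch of the qemu side of that step; xc_domain_memory_mapping() is
the hypercall from section (5), while dpci_bar_moved() and its
bookkeeping are illustrative:

#include <stdint.h>

extern int xc_domain_memory_mapping(int xc_handle, uint32_t domid,
                                    unsigned long first_gfn,
                                    unsigned long first_mfn,
                                    unsigned long nr_mfns,
                                    uint32_t add_mapping);

static void dpci_bar_moved(int xc_handle, uint32_t domid,
                           unsigned long old_gfn, unsigned long new_gfn,
                           unsigned long mfn, unsigned long nr_mfns)
{
    /* tear down the p2m mapping at the BAR's previous guest address */
    if (old_gfn != 0)
        xc_domain_memory_mapping(xc_handle, domid, old_gfn, mfn,
                                 nr_mfns, 0 /* remove */);

    /* map the device's machine frames at the BAR's new guest address */
    xc_domain_memory_mapping(xc_handle, domid, new_gfn, mfn,
                             nr_mfns, 1 /* add */);
}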

- ioport
  - Xen intercepts guest I/O port accesses
  - translates the guest I/O port to the corresponding machine I/O port
  - performs the machine port access on behalf of the guest
    (a sketch follows this list)
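
A sketch of that flow through dpci_ioport_intercept() (prototype in
section (6)); the gport-to-mport lookup and the machine port accessors
are assumed names:

/* ioreq_t is Xen's I/O request type; addr/size/data/dir and
 * IOREQ_READ are its existing fields/constants */
extern unsigned long gport_to_mport(unsigned long gport);  /* assumed */
extern unsigned long machine_ioport_read(unsigned long port, int size);
extern void machine_ioport_write(unsigned long port,
                                 unsigned long val, int size);

int dpci_ioport_intercept(ioreq_t *p, int type)
{
    unsigned long mport = gport_to_mport(p->addr);

    if (mport == 0)
        return 0;                /* port not assigned to this guest */

    if (p->dir == IOREQ_READ)    /* guest "in" instruction */
        p->data = machine_ioport_read(mport, p->size);
    else                         /* guest "out" instruction */
        machine_ioport_write(mport, p->data, p->size);

    return 1;                    /* access handled on guest's behalf */
}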

5) new hypercalls

int xc_assign_device(int xc_handle,
                     uint32_t domain_id,
                     uint32_t machine_bdf);

int xc_domain_ioport_mapping(int xc_handle,
                             uint32_t domid,
                             uint32_t first_gport,
                             uint32_t first_mport,
                             uint32_t nr_ports,
                             uint32_t add_mapping);

int xc_irq_mapping(int xc_handle,
                   uint32_t domain_id,
                   uint32_t method,
                   uint32_t machine_irq,
                   uint32_t device,
                   uint32_t intx,
                   uint32_t add_mapping);

int xc_domain_memory_mapping(int xc_handle,
                             uint32_t domid,
                             unsigned long first_gfn,
                             unsigned long first_mfn,
                             unsigned long nr_mfns,
                             uint32_t add_mapping);
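
An illustrative tool-side sequence using the wrappers above to hand
the example device 02:00.0 to a domain (the BDF encoding, IRQ, port
range and "method" value are all made-up examples):

static int assign_example_device(int xc_handle, uint32_t domid)
{
    uint32_t bdf = (0x02 << 8) | (0x00 << 3) | 0x0;   /* 02:00.0 */

    if (xc_assign_device(xc_handle, domid, bdf))
        return -1;

    /* bind machine IRQ 16 to INTA of virtual slot 2 */
    if (xc_irq_mapping(xc_handle, domid, 0 /* method */, 16, 2, 0,
                       1 /* add */))
        return -1;

    /* pass a 32-port I/O range through 1:1 at 0xc000 */
    return xc_domain_ioport_mapping(xc_handle, domid, 0xc000, 0xc000,
                                    32, 1 /* add */);
}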

6) interface to common code: 

Calls to the following common code interfaces are qualified by the
iommu_found() and device_assigned() macros (a sketch of the pattern
follows the prototypes).

int iommu_setup(void);
int iommu_domain_init(struct domain *d);
int assign_device(struct domain *d, u8 bus, u8 devfn);
int release_devices(struct vcpu *v);
int hvm_do_IRQ_dpci(struct domain *d, unsigned int irq);
int dpci_ioport_intercept(ioreq_t *p, int type);

int iommu_page_mapping(
    struct domain *domain, dma_addr_t iova,
    void *hpa, size_t size, int prot);

int iommu_page_unmapping(
    struct domain *domain, dma_addr_t iova, size_t size);
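
A sketch of that qualification at a common-code call site; only
iommu_found() and iommu_domain_init() are named above, the wrapper
itself is illustrative:

struct domain;
extern int iommu_found(void);        /* qualification macro above */
extern int iommu_domain_init(struct domain *d);

static void arch_domain_init_example(struct domain *d)
{
    /* vt-d paths are only entered when an IOMMU is present, so
     * non-vt-d platforms and guests are unaffected (no harm done) */
    if ( iommu_found() )
        iommu_domain_init(d);
}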

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
