This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Re: Intel GPU pass-through with > 3G

To: "Kay, Allen M" <allen.m.kay@xxxxxxxxx>
Subject: [Xen-devel] Re: Intel GPU pass-through with > 3G
From: Jean Guyader <jean.guyader@xxxxxxxxx>
Date: Thu, 11 Nov 2010 08:42:28 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 11 Nov 2010 00:43:27 -0800
Dkim-signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=s/5oQLKDeQ3ACbRO5PIKjosoTx+p+1u5YX6a/wv7kCA=; b=IwxYBnbnVSdZ75ON+FqWFQrcAsoxkP5RPb0UJdjxgIQQ5hACyno4RwP9pfn0oMox1T qzPU1AoXNDxoYV+v0xv8Xz9lQkF7k/e9d34azvqoulL3HCmjIANTcc7k7cO/HsZk9XKt LTy1tFuEFMTCUWUU10/7H6Rnv3NcDSf/w2MNQ=
Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=hBC1BGpRDsBFW3iifpyzVnu2N0df874YPPgKat+1JpTe+hLuh0ehEzjh1F3G5ttzoi Cj5MFumBbzaAF3FSYa0ZBFQKHnX2l9T049UXX/D8VMWxSqUmvda5gGCBXcAq0tuIqxrc IxaTUq+N5pyAAmP21fl7juBNS0kFilLwBWpR4=
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <987664A83D2D224EAE907B061CE93D5301649EFDA2@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTimNJZXkHDhwQDhYr6oiQE_uXarktA9-AL4Hp9xn@xxxxxxxxxxxxxx> <987664A83D2D224EAE907B061CE93D5301649EFDA2@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
It's consistent. The reason is that the VT-d mapping already happened
once when Xen allocated the guest memory,
so the relocation of the pages for the PCI hole ends up being a second mapping.


On 11 November 2010 00:04, Kay, Allen M <allen.m.kay@xxxxxxxxx> wrote:
> Jean,
> Do you see any boot time difference between passing through integrated 
> graphics for the very first time and the subsequent times?  Which platform 
> are you using?
> Allen
> -----Original Message-----
> From: Jean Guyader [mailto:jean.guyader@xxxxxxxxx]
> Sent: Wednesday, November 10, 2010 1:50 PM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Kay, Allen M
> Subject: Intel GPU pass-through with > 3G
> Hello,
> I'm passing through a graphics card to a guest that has more than 3G of
> RAM (4G, to be precise, in my case).
> What happens is that VM creation gets stuck, so I put some tracing in
> the Xen code to see what was taking the time. I discovered that the
> guest was stuck in hvmloader, inside this loop:
>    while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
>    {
>        struct xen_add_to_physmap xatp;
>        if ( hvm_info->high_mem_pgend == 0 )
>            hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
>        xatp.domid = DOMID_SELF;
>        xatp.space = XENMAPSPACE_gmfn;
>        xatp.idx   = --hvm_info->low_mem_pgend;
>        xatp.gpfn  = hvm_info->high_mem_pgend++;
>        if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
>            BUG();
>    }
> This loop relocates RAM to the top to leave some space for the PCI BARs.
> It loops over each page, so in my case it's quite a big loop because the
> GPU has a 256M BAR.
> The interesting part is that the add_to_physmap function takes most of
> the time. I believe most of it goes to the IOMMU IOTLB flush that comes
> with iommu_map_page or iommu_unmap_page, which are called when we
> manipulate the p2m table.
> In my case each IOMMU flush takes a very long time (because of the Intel
> GPU?), about 10 milliseconds. So if I'm patient enough my domain will
> start, after about 10 minutes.
> One way forward would be to create a range interface to iommu_map_page
> and iommu_unmap_page, since the IOMMU flushes are so expensive. Then some
> work would need to be done to add a range interface to every function
> between add_to_physmap and p2m_set_entry, which would be a big patch.
> I hope there is another way out of this problem.
> Thanks,
> Jean

Xen-devel mailing list