
Re: [Xen-devel] GPU passthrough issue when VM is configured with 4G memory



On Wed, Mar 06, 2013 at 04:04:39PM +0200, Pasi Kärkkäinen wrote:
> On Wed, Mar 06, 2013 at 12:43:09PM +0000, George Dunlap wrote:
> > On 06/03/13 11:38, Hanweidong wrote:
> > >> -----Original Message-----
> > >> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of George
> > >> Dunlap
> > >> Sent: 5 March 2013 20:59
> > >> To: Gonglei (Arei)
> > >> Cc: xen-devel@xxxxxxxxxxxxx; Yangxiaowei; Yanqiangjun; Luonengjun;
> > >> Wangzhenguo; Hanweidong
> > >> Subject: Re: [Xen-devel] GPU passthrough issue when VM is configured
> > >> with 4G memory
> > >>
> > >> On Mon, Mar 4, 2013 at 8:10 AM, Gonglei (Arei) <arei.gonglei@xxxxxxxxxx>
> > >> wrote:
> > >>> Hi, all
> > >>>
> > >>> I have tried to pass through a GPU card (an Nvidia Quadro 4000) on the
> > >>> latest xen-unstable (with qemu-upstream-unstable, not the traditional
> > >>> qemu). The issue is as follows:
> > >>>
> > >>> A Windows 7 64-bit guest blue-screens (stop code 0x50) when the
> > >>> GPU-passthrough VM is configured with 4G of memory, and a SUSE 11
> > >>> 64-bit guest always stays stuck at the grub screen. I noticed that
> > >>> pci_setup() (tools/firmware/hvmloader/pci.c) will relocate RAM that
> > >>> overlaps the PCI space. If the VM is configured with 3G of memory,
> > >>> pci_setup() does not need to relocate any RAM, and GPU passthrough
> > >>> works fine. So this issue seems to be related to the "relocate RAM"
> > >>> logic in pci_setup().
> > >> So one issue XenServer found with passing through GPUs is that there
> > >> are bugs in some PCI bridges that completely break VT-d.  The issue
> > >> was that if the *guest* physical address space overlapped the *host*
> > >> physical address of a different device, the PCI bridges would
> > >> send traffic from the passed-through card, intended for the guest, to
> > >> another card instead.  The work-around was to make the hole in the
> > >> guest MMIO space the same size as the host MMIO hole.  I'm not sure if
> > >> that made it upstream or not -- let me check...
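(For reference, one way to find out how big the host's low MMIO hole actually
is -- so you know what hole size to give the guest -- is to scan /proc/iomem
in dom0 for the lowest PCI range that starts below 4 GiB. A minimal sketch,
run as root since /proc/iomem hides addresses from unprivileged users; the
"match any line mentioning PCI" heuristic is my own assumption, not anything
Xen provides:)

    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/iomem", "r");
        char line[256];
        uint64_t lowest = 1ULL << 32;           /* 4 GiB */

        if (!f) {
            perror("/proc/iomem");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            uint64_t start, end;
            if (sscanf(line, "%" SCNx64 "-%" SCNx64, &start, &end) != 2)
                continue;
            if (strstr(line, "PCI") && start < (1ULL << 32) && start < lowest)
                lowest = start;                 /* lowest PCI range below 4G */
        }
        fclose(f);

        printf("host low MMIO seems to start at 0x%" PRIx64 "\n", lowest);
        printf("host MMIO hole size ~%" PRIu64 " MiB\n",
               ((1ULL << 32) - lowest) >> 20);
        return 0;
    }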
> > >>
> > > Hi George,
> > >
> > > Could you post your patch so that we can give it a try? Thanks!
> > 
> > So the patch got checked in, but there may still be some more work to do
> > if you want to use it. :-)
> > 
> > The patch adds a field called "mmio_size" to the xc_hvm_build_args
> > structure. If it is left at zero, the MMIO hole defaults to
> > HVM_BELOW_4G_MMIO_LENGTH; otherwise, it is used as the size of the MMIO
> > hole set up during the build process. The guest BIOS may enlarge the
> > hole at boot time, but it will never make it smaller.
> > 
> > Since this was designed for xapi, which calls libxc directly, we didn't
> > add any way to set it from xend / xl / libxl.
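(A minimal sketch of what such a direct libxc caller might look like, with
the mmio_size field described above set to 1 GiB. The surrounding fields and
the exact xc_hvm_build() prototype vary between trees, so treat this as
illustrative only, not as the xapi/XenServer code:)

    #include <stdint.h>
    #include <string.h>

    #include <xenctrl.h>
    #include <xenguest.h>

    int build_hvm_with_1g_hole(xc_interface *xch, uint32_t domid,
                               const char *hvmloader_path, uint64_t mem_mb)
    {
        struct xc_hvm_build_args args;

        memset(&args, 0, sizeof(args));
        args.mem_size   = mem_mb << 20;   /* guest memory, in bytes */
        args.mem_target = mem_mb << 20;
        args.mmio_size  = 1ULL << 30;     /* 0 would mean HVM_BELOW_4G_MMIO_LENGTH */
        args.image_file_name = hvmloader_path;

        return xc_hvm_build(xch, domid, &args);
    }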
> > 
> > The easiest way to test it is probably just to hard-code
> > HVM_BELOW_4G_MMIO_LENGTH to a new value (from the description, setting
> > it to 1GiB should be sufficient).
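(For a quick test like that: in my tree these constants live in
xen/include/public/hvm/e820.h and look roughly like the snippet below, so
dropping the RAM end from 0xF0000000 to 0xC0000000 would turn the default
256 MiB hole into a 1 GiB one. The file and exact values are from memory --
double-check your tree before patching:)

    #define HVM_BELOW_4G_RAM_END     0xC0000000U   /* default is 0xF0000000 */
    #define HVM_BELOW_4G_MMIO_START  HVM_BELOW_4G_RAM_END
    #define HVM_BELOW_4G_MMIO_LENGTH ((1ull << 32) - HVM_BELOW_4G_MMIO_START)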
> > 
> > Then if you want to use it in production, you probably want to either:
> > 1. Try it with the latest version of XCP (which I think has an option
> > you can set)
> > 2. Implement a config option for xl that allows you to set the MMIO hole
> > size.
> > 
> > #2 should be a relatively straightforward matter of "plumbing", and
> > would be a welcome contribution. :-)
> > 
> > If you do implement #2, it might be nice to have an option like
> > "mmio_hole_size=host", which would set the guest MMIO hole to the same
> > size as the host's. That's what we implemented for XenServer, to make
> > sure there would never be any collisions.
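(Purely hypothetical sketch of what such an xl knob might look like once it
is plumbed through -- this syntax does not exist yet:)

    # hypothetical guest config syntax, not implemented:
    mmio_hole_size = "host"      # match the host MMIO hole
    #mmio_hole_size = 1024       # ...or an explicit size, e.g. in MiB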
> > 
> 
> Does the e820_host= option affect this?

No. That is for PV guests only.
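(For completeness: e820_host= is a boolean in the xl config of PV guests that
exposes the host's e820 memory map to the guest, which mainly helps PCI
passthrough to PV guests, e.g.:

    e820_host = 1

It has no effect on the HVM MMIO hole discussed above.)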

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

