
Re: [Xen-devel] Does xen-4.2.0 support VGA passthrough with the virtual machine created by xl command?



On Nov 14,  3:40pm, "Dr. Greg Wettstein" wrote:
} Subject: Re: [Xen-devel] Does xen-4.2.0 support VGA passthrough with the v

Good morning, hope the day is going well for everyone.

> On Nov 13, 10:02am, Ian Campbell wrote:
> } Subject: Re: [Xen-devel] Does xen-4.2.0 support VGA passthrough with the v
> 
> Good afternoon, hope the week is going well for everyone.
> 
> > On Tue, 2012-11-13 at 06:30 +0000, Qian Hu wrote:
> > 
> > > With spice tool, I have to create a VM by xl command, and now I am
> > > wondering if it supports VGA passthrough?
> 
> > This list is for the development of Xen. You'd probably have more
> > luck with these sorts of support requests on the xen-users list.
> 
> That would normally be the case but I'm suspicious there are issues
> with VGA passthrough in 4.2.0.

I just wanted to follow up with the list on the status of the
passthrough issues.

We reverted our test machine to the 2.6.32.45 kernel we had been using
in production; that kernel was based on Jeremy's git tree.  Using xm
and the updated ATI patches I referenced in my original mail,
passthrough works as it should.
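
For reference, the sort of guest definition involved is the usual HVM
passthrough setup; a minimal sketch would look roughly like this (the
BDF and disk path are placeholders, not our exact values):

    builder = "hvm"
    memory = 4096
    disk = [ "file:/path/to/windows.img,hda,w" ]
    # Hand the ATI adapter to the guest and let it act as the
    # primary VGA device.
    gfx_passthru = 1
    pci = [ "01:00.0" ]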

Passthrough does not work with xl.  Windows started but fell into its
text-mode rescue screen and registered a crash dump.  It flashed the
screen back and forth between a stippled blue/grey and a totally black
screen a few times and then locked the physical machine up solidly.

On the next boot I thought about it but declined to register the crash
dump with Microsoft.... :-)

We then went back and tested the 3.4.18 kernel; with both xm and xl
the guest faults on its first attempt at an I/O port access.  All
other factors (Windows image, hardware, Xen guest config) are held
identical, so the difference would seem to lie in the PCI passthrough
implementations of the two kernels.  I've copied Konrad on this note
since he would seem to be the person most familiar with this area.
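
For completeness, in both cases the adapter has to be detached from
dom0 and handed to the pciback driver before the guest starts; a rough
sketch of the usual steps (the BDF is again a placeholder, and on the
pvops kernels the module is xen-pciback rather than the old pciback):

    # Hide the card on the dom0 kernel command line:
    #   xen-pciback.hide=(01:00.0)
    # or make it assignable at run time under xl:
    xl pci-assignable-add 01:00.0
    xl pci-assignable-list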

I'm including below a diff between a successful qemu-dm passthrough
session (2.6.32.45) and an unsuccessful session (3.4.18).  It would
appear 3.4.18 is getting both the I/O port and the memory ranges
wrong.
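
The diff comes from the per-domain device model logs that qemu-dm
writes under /var/log/xen; roughly something like the following, with
the two runs saved off under illustrative names:

    # compare the device model log from the working kernel with the
    # one from the failing kernel
    diff /var/log/xen/qemu-dm-guest.log.2.6.32.45 \
         /var/log/xen/qemu-dm-guest.log.3.4.18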

Let me know if I can forward any additional information or run any
additional tests.

Have a good weekend.

Greg

qemu-dm log diff: ---------------------------------------------------------
1c1
< domid: 3
---
> domid: 1
3,5c3,5
< Watching /local/domain/0/device-model/3/logdirty/cmd
< Watching /local/domain/0/device-model/3/command
< Watching /local/domain/3/cpu
---
> Watching /local/domain/0/device-model/1/logdirty/cmd
> Watching /local/domain/0/device-model/1/command
> Watching /local/domain/1/cpu
9c9
< Guest uuid = 7fcefb13-d1ef-105b-e38c-1e1454411e80
---
> Guest uuid = eab6bbbb-4819-b970-a83c-03288a1541ad
14c14
< xs_read(/local/domain/0/device-model/3/xen_extended_power_mgmt): read error
---
> xs_read(/local/domain/0/device-model/1/xen_extended_power_mgmt): read error
18,20c18,20
< xs_read(/local/domain/3/log-throttling): read error
< qemu: ignoring not-understood drive `/local/domain/3/log-throttling'
< medium change watch on `/local/domain/3/log-throttling' - unknown device, ignored
---
> xs_read(/local/domain/1/log-throttling): read error
> qemu: ignoring not-understood drive `/local/domain/1/log-throttling'
> medium change watch on `/local/domain/1/log-throttling' - unknown device, ignored
69,106c69,77
< pt_iomem_map: e_phys=f1020000 maddr=c1a00000 type=0 len=131072 index=2 first_map=1
< pt_iomem_map: e_phys=f1060000 maddr=c1b22000 type=0 len=4096 index=0 first_map=1
< pt_ioport_map: e_phys=c600 pio_base=3000 len=256 index=4 first_map=1
< pt_ioport_map: e_phys=c600 pio_base=3000 len=256 index=4 first_map=0
< ati_gfx_init: guest_pio_bar = 0xc600, host_pio_bar = 0x3000, pio_size=0x100 guest_mmio_bar1=0xe0000000, guest_mmio_bar2=0x0
< platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
< platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
< pt_pci_read_config: [00:06:0] Error: Failed to read register with invalid access size alignment. [Offset:0eh][Length:4]
< pt_pci_read_config: [00:06:0] Error: Failed to read register with invalid access size alignment. [Offset:0eh][Length:4]
< pt_pci_read_config: [00:06:0] Error: Failed to read register with invalid access size alignment. [Offset:0eh][Length:4]
< pt_pci_read_config: [00:06:0] Error: Failed to read register with invalid access size alignment. [Offset:0eh][Length:4]
< pt_pci_read_config: [00:06:0] Error: Failed to read register with invalid access size alignment. [Offset:0eh][Length:4]
< pt_pci_read_config: [00:06:0] Error: Failed to read register with invalid access size alignment. [Offset:0eh][Length:4]
< pt_pci_read_config: [00:06:0] Error: Failed to read register with invalid access size alignment. [Offset:0eh][Length:4]
< pt_iomem_map: e_phys=ffffffff maddr=b0000000 type=8 len=268435456 index=0 first_map=0
< pt_iomem_map: e_phys=ffffffff maddr=c1a00000 type=0 len=131072 index=2 first_map=0
< pt_iomem_map: e_phys=e0000000 maddr=b0000000 type=8 len=268435456 index=0 first_map=0
< pt_iomem_map: e_phys=f1020000 maddr=c1a00000 type=0 len=131072 index=2 first_map=0
< pt_ioport_map: e_phys=c600 pio_base=3000 len=256 index=4 first_map=0
< pt_iomem_map: e_phys=ffffffff maddr=c1b22000 type=0 len=4096 index=0 first_map=0
< pt_iomem_map: e_phys=f1060000 maddr=c1b22000 type=0 len=4096 index=0 first_map=0
< pt_iomem_map: e_phys=ffffffff maddr=b0000000 type=8 len=268435456 index=0 first_map=0
< pt_iomem_map: e_phys=ffffffff maddr=c1a00000 type=0 len=131072 index=2 first_map=0
< pt_iomem_map: e_phys=e0000000 maddr=b0000000 type=8 len=268435456 index=0 first_map=0
< pt_iomem_map: e_phys=f1020000 maddr=c1a00000 type=0 len=131072 index=2 first_map=0
< pt_ioport_map: e_phys=c600 pio_base=3000 len=256 index=4 first_map=0
< pt_ioport_map: e_phys=c600 pio_base=3000 len=256 index=4 first_map=0
< pt_msgctrl_reg_write: guest enabling MSI, disable MSI-INTx translation
< pci_intx: intx=1
< pt_msi_disable: Unmap msi with pirq 37
< pt_msgctrl_reg_write: setup msi for dev 30
< pt_msi_setup: msi mapped with pirq 37
< pt_msi_update: Update msi with pirq 37 gvec b0 gflags 0
< pt_iomem_map: e_phys=ffffffff maddr=c1b22000 type=0 len=4096 index=0 first_map=0
< pt_iomem_map: e_phys=f1060000 maddr=c1b22000 type=0 len=4096 index=0 first_map=0
< pt_iomem_map: e_phys=ffffffff maddr=c1b22000 type=0 len=4096 index=0 first_map=0
< shutdown requested in cpu_handle_ioreq
< Issued domain 3 poweroff
---
> pt_iomem_map: e_phys=f1000000 maddr=c1a00000 type=0 len=131072 index=2 first_map=1
> pt_iomem_map: e_phys=f1040000 maddr=c1b22000 type=0 len=4096 index=0 first_map=1
> pt_ioport_map: e_phys=c700 pio_base=3000 len=256 index=4 first_map=1
> pt_ioport_map: e_phys=c700 pio_base=3000 len=256 index=4 first_map=0
> ati_gfx_init: guest_pio_bar = 0xc700, host_pio_bar = 0x3000, pio_size=0x100 guest_mmio_bar1=0xe0000000, guest_mmio_bar2=0x0
> ati_io_regs_read: Requested read of c74c/51020, mapped: 304c/12364
> ati_hw_in: port I/O: 304c, base: 3000, size: 100
> ati_hw_in: ioperm successful
> ati_hw_in: Read: 0
---------------------------------------------------------------------------

}-- End of excerpt from "Dr. Greg Wettstein"

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@xxxxxxxxxxxx
------------------------------------------------------------------------------
"Boy, it must not take much to make a phone work.  Looking at
 everything else here it must be the same way with the INTERNET."
                                -- Francis 'Fritz' Wettstein

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

