
Re: [Xen-devel] Graphics passthru for dual-GPU cards to two domU's



Thanks Xiantao,

I am using a Dual Xeon Tyan server here and VT-d is enabled.  I was
hoping someone might spot something special in the PCI listing for the
device that might explain why I haven't been able to get the second
GPU to start.
One difference between the primary and secondary GPU is that the
second GPU on the card does not have any VGA output capability.
However, we don't need VGA output, since we remote in to the guest.
This limitation hasn't stopped us from getting VMware to successfully
dedicate the second GPU to its own virtual machine.

Looking at the xen-unstable source tree I do see some significant
changes to VGA passthrough, so I may try that and see if I have any
more success.
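For reference, here is a rough sketch of the split I'm attempting with the
xl toolstack from xen-unstable. The config file names are just examples,
and this assumes xen-pciback is loaded in dom0; the BDFs match the lspci
output quoted below:

```shell
# Make all four functions assignable (each GPU plus its HDMI audio
# function); BDFs are from the lspci listing quoted below.
xl pci-assignable-add 0000:12:00.0
xl pci-assignable-add 0000:12:00.1
xl pci-assignable-add 0000:13:00.0
xl pci-assignable-add 0000:13:00.1

# In guest-a.cfg (example name): the VGA-capable primary GPU
# plus its audio function.
#   pci          = [ '0000:13:00.0', '0000:13:00.1' ]
#   gfx_passthru = 1

# In guest-b.cfg (example name): the secondary, non-VGA GPU
# plus its audio function.
#   pci = [ '0000:12:00.0', '0000:12:00.1' ]

xl create guest-a.cfg
xl create guest-b.cfg
```

Each GPU's HDMI audio function is kept together with its GPU, since the
guest driver expects both functions of its half of the card.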

Matt

On 20 February 2012 22:54, Zhang, Xiantao <xiantao.zhang@xxxxxxxxx> wrote:
> If the card has two full-fledged PCI-e graphics functions, I think you can
> follow the link (http://wiki.xen.org/wiki/Xen_VGA_Passthrough) to do your
> passthru work. It depends on Intel's VT-d or AMD's IOMMU technology, so
> please make sure all the required components are ready in your system.  I think
> Xen's mailing list archive also has some detailed discussions about how to
> pass one graphics device through to the guest and how that differs from
> general PCIe device assignment.
> Xiantao
>
>> -----Original Message-----
>> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
>> bounces@xxxxxxxxxxxxx] On Behalf Of Matthew Hook
>> Sent: Tuesday, February 21, 2012 12:59 PM
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Subject: [Xen-devel] Graphics passthru for dual-GPU cards to two domU's
>>
>> Is it possible in Xen, with a dual-GPU graphics card that appears as two
>> separately addressable PCI devices, to pass each GPU through to a separate guest?
>> Although I initially didn't think this was possible, VMware supports it.
>> It's useful under Xen for increasing virtual server density when providing
>> virtual desktops or similar.
>> Although not many dual-GPU cards are on the market at the moment, it
>> seems likely that dual- or quad-GPU cards will become the norm in the near
>> future.
>>
>> For example, I have a number of dual-GPU AMD Radeon HD 6990 cards.
>> I have successfully passed through one of the GPUs on the card.
>> However, I get a BSOD when bringing up the second one.
>>
>> I'm wondering whether compiling in the VGA BIOS might help?
>> If so, where is a good resource on how I could do that?
>>
>>
>> Cards show in lspci as follows:
>>
>> 12:00.0 Display controller: ATI Technologies Inc Device 671d
>> 12:00.1 Audio device: ATI Technologies Inc Device aa80
>> 13:00.0 VGA compatible controller: ATI Technologies Inc Device 671d
>> 13:00.1 Audio device: ATI Technologies Inc Device aa80
>>
>>
>> And the lspci -t topology:
>>
>>  +-07.0-[0000:0a-13]----00.0-[0000:0b-13]--+-04.0-[0000:10-13]----00.0-[0000:11-13]--+-04.0-[0000:13]--+-00.0  ATI Technologies Inc Device 671d
>>                                                                                     |                 \-00.1  ATI Technologies Inc Device aa80
>>                                                                                     \-08.0-[0000:12]--+-00.0  ATI Technologies Inc Device 671d
>>                                                                                                       \-00.1  ATI Technologies Inc Device aa80
>>
>>
>> More details on the devices themselves.
>>
>> 12:00.0 Display controller: ATI Technologies Inc Device 671d
>>         Subsystem: ATI Technologies Inc Device 1b2a
>>         Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop-
>> ParErr- Stepping- SERR- FastB2B- DisINTx-
>>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
>> <TAbort- <MAbort- >SERR- <PERR- INTx-
>>         Interrupt: pin A routed to IRQ 30
>>         Region 0: Memory at 80000000 (64-bit, prefetchable) [disabled]
>> [size=256M]
>>         Region 2: Memory at fb6c0000 (64-bit, non-prefetchable) [disabled]
>> [size=128K]
>>         Region 4: I/O ports at 9000 [disabled] [size=256]
>>         Expansion ROM at fb6a0000 [disabled] [size=128K]
>>         Capabilities: [50] Power Management version 3
>>                 Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA
>> PME(D0-,D1-,D2-,D3hot-,D3cold-)
>>                 Status: D0 PME-Enable- DSel=0 DScale=0 PME-
>>         Capabilities: [58] Express (v2) Legacy Endpoint, MSI 00
>>                 DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, 
>> L1
>> unlimited
>>                         ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
>>                 DevCtl: Report errors: Correctable- Non-Fatal- Fatal-
>> Unsupported-
>>                         RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
>>                         MaxPayload 128 bytes, MaxReadReq 512 bytes
>>                 DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+
>> AuxPwr- TransPend-
>>                 LnkCap: Port #8, Speed 2.5GT/s, Width x16, ASPM L0s L1, 
>> Latency L0
>> <64ns, L1 <1us
>>                         ClockPM- Suprise- LLActRep- BwNot-
>>                 LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- 
>> CommClk-
>>                         ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
>>                 LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train-
>> SlotClk+ DLActive- BWMgmt- ABWMgmt-
>>         Capabilities: [a0] Message Signalled Interrupts: Mask- 64bit+
>> Queue=0/0 Enable-
>>                 Address: 0000000000000000  Data: 0000
>>         Capabilities: [100] Vendor Specific Information <?>
>>         Capabilities: [150] Advanced Error Reporting <?>
>>         Kernel driver in use: pciback
>>
>>
>> 13:00.0 VGA compatible controller: ATI Technologies Inc Device 671d
>>         Subsystem: ATI Technologies Inc Device 0b2a
>>         Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop-
>> ParErr- Stepping- SERR- FastB2B- DisINTx-
>>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
>> <TAbort- <MAbort- >SERR- <PERR- INTx-
>>         Interrupt: pin A routed to IRQ 30
>>         Region 0: Memory at 90000000 (64-bit, prefetchable) [disabled]
>> [size=256M]
>>         Region 2: Memory at fb7c0000 (64-bit, non-prefetchable) [disabled]
>> [size=128K]
>>         Region 4: I/O ports at a000 [disabled] [size=256]
>>         Expansion ROM at fb7a0000 [disabled] [size=128K]
>>         Capabilities: [50] Power Management version 3
>>                 Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA
>> PME(D0-,D1-,D2-,D3hot-,D3cold-)
>>                 Status: D0 PME-Enable- DSel=0 DScale=0 PME-
>>         Capabilities: [58] Express (v2) Legacy Endpoint, MSI 00
>>                 DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, 
>> L1
>> unlimited
>>                         ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
>>                 DevCtl: Report errors: Correctable- Non-Fatal- Fatal-
>> Unsupported-
>>                         RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
>>                         MaxPayload 128 bytes, MaxReadReq 512 bytes
>>                 DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+
>> AuxPwr- TransPend-
>>                 LnkCap: Port #4, Speed 2.5GT/s, Width x16, ASPM L0s L1, 
>> Latency L0
>> <64ns, L1 <1us
>>                         ClockPM- Suprise- LLActRep- BwNot-
>>                 LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- 
>> CommClk-
>>                         ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
>>                 LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train-
>> SlotClk+ DLActive- BWMgmt- ABWMgmt-
>>         Capabilities: [a0] Message Signalled Interrupts: Mask- 64bit+
>> Queue=0/0 Enable-
>>                 Address: 0000000000000000  Data: 0000
>>         Capabilities: [100] Vendor Specific Information <?>
>>         Capabilities: [150] Advanced Error Reporting <?>
>>         Kernel driver in use: pciback
>>
>> Matt
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
