
RE: [Xen-devel] Move some of the PCI device manage/control into pciback?



xen-devel-bounces@xxxxxxxxxxxxxxxxxxx <> wrote:
> On 15/01/2009 10:17, "Shohei Fujiwara"
> <fujiwara-sxa@xxxxxxxxxxxxxxx> wrote:
> 
>> For an HVM domain with a stub domain, I'm considering direct access
>> from ioemu to configuration space. We can achieve this by mapping a
>> subset of MMCFG into the stub domain. This will improve the scalability
>> of PCI pass-through and reduce dom0's responsibilities.
>> 
>> My model is the following.
>> 
>>     1. The PCI back driver resets the device and sets it up.
>>     2. The PCI back driver hands responsibility for the device's
>>        configuration space to ioemu.
>>     3. Ioemu reads/writes the device's configuration space in
>>        response to the guest OS.
>>     4. When ioemu exits, the PCI back driver takes back responsibility
>>        for the device's configuration space.
>>     5. The PCI back driver resets the device (and puts it in the
>>        D3hot state if possible).
>> 
>> As you know, xend currently reads/writes configuration space. If xend
>> no longer did so, the architecture would become simpler.
>> 
>> What do you think about this?
> 
> I'd rather have all accesses mediated through pciback. I don't think PCI
> config accesses should be on any data path anyway, and you've already taken
> the hit of trapping to qemu in that case.

There is one exception: the mask bit for MSI/MSI-X. We may need to add a
mechanism for an HVM domain to mask/unmask the virtual interrupt directly,
similar to what a PV domU does with event channels. But that will be tricky.

Thanks
Yunhong Jiang

> 
> -- Keir
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel

