
RE: [Xen-devel] Move some of the PCI device manage/control into pciback?



xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote:
> On Thu, 15 Jan 2009 11:09:21 +0800
> "Cui, Dexuan" <dexuan.cui@xxxxxxxxx> wrote:
> 
>> 1) Now in Xen VT-d, the FLR-related things (can the device(s) be
>> statically/dynamically assigned to a guest? how should the device(s)
>> be FLR-ed?) are done in xend. The diff of the python patch is ~700
>> lines.  We may consider moving these things into pciback.  Certainly,
>> with this logic in pciback, I'm afraid we'll have less flexibility
>> -- a small adjustment (e.g., some people would like to relax the
>> co-assignment constraint) or a bug fix would require a reload of pciback
>> or a reboot of the host (if pciback is built into the Dom0 kernel). And we
>> have some other issues: a) moving all the python logic into
>> pciback in C takes a big effort, so the large line count may be
>> unwelcome; b) we may need to add an
>> interface between pciback and the control panel so that xend can invoke
>> these FLR-related functions of pciback

I'm still not sure we really need such flexibility in a production environment.

>> 
>> 2) Now the PCI config space virtualization for PV and HVM guests is
>> not the same, and there is some duplicated code in pciback and
>> ioemu. Currently the ioemu in Dom0 accesses device config space via libpci
>> (i.e., /sys); maybe ioemu could talk to pciback directly?  In the case
>> of a stubdomain, it looks like libpci is implemented via pcifront -- if
>> ioemu could talk to pciback directly, I think we could eliminate the
>> duplicated code in ioemu and we'd have consistency between PV
>> and HVM.

So you mean ioemu would initiate xen_pci_op requests directly to pciback?

> 
> I agree with you that pciback and ioemu contain similar code. But
> I'm not happy if the code is removed from ioemu.
> 
> In the case of an HVM domain with a stub domain, I'm considering direct
> access from ioemu to the configuration space.  We can achieve this by
> mapping the relevant subset of MMCFG into the stub domain. This will
> improve the scalability of PCI
> pass-through and reduce the responsibility of dom0.
> 
> My model is the following.
> 
>    1. The PCI back driver resets the device and sets it up.
>    2. The PCI back driver passes responsibility for the device's
>       configuration space to ioemu.
>    3. Ioemu reads/writes the device's configuration space in
>       response to the guest OS.
>    4. When ioemu exits, the PCI back driver takes back responsibility
>       for the device's configuration space.
>    5. The PCI back driver resets the device (and puts it into D3hot if possible)
> 
> As you know, the current xend reads/writes configuration space. If xend
> didn't read/write it, the architecture would become simpler.
> 
> What do you think about this?

Shohei, I think this model may have some issues.
a) The stubdomain/qemu is not trusted, so a user could run a fake stub domain and
try to program sensitive config space registers (such as MSI).
b) If there is no MMCFG support, synchronizing access to cf8/cfc will be difficult.
Do you mean we would have different implementations for the MMCFG and cf8/cfc methods?

> 
> Thanks,
> --
> Shohei Fujiwara
> 
>> And for the PCI passthrough-related hypercalls invoked by
>> the ioemu in the de-privileged stubdomain, I think ioemu can ask
>> pciback to invoke the hypercalls on its behalf, but this requires us
>> to add an interface in pciback.
> 
>> All these things require us to re-architect the current code. Will
>> this bring compatibility issues? I remember it was said Xen 3.4 will
>> be released in March; is now the right time for us to consider these
>> changes?
>> 
>> PS, in the long run -- how long? -- will ioemu be removed from Dom0,
>> leaving the stubdomain as the only place for ioemu?
>> 
>> Any comment is appreciated!
>> 
>> Thanks,
>> -- Dexuan
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
> 
> 