
Re: [Xen-devel] Xen 4.2 Release Plan / TODO



On Thu, Mar 22, 2012 at 9:53 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Thu, 2012-03-22 at 09:35 +0000, George Dunlap wrote:
>> On Mon, Mar 19, 2012 at 10:57 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> 
>> wrote:
>> >      * xl compatibility with xm:
>> >              * feature parity wrt driver domain support (George Dunlap)
>> I just discovered (while playing with driver domains) that xl is
>> missing one bit of feature parity with xm for PCI passthrough for PV
>> guests -- the "pci quirk" config file support.  I'm going to ask
>> Intel whether they have an interest in porting it over; I think it
>> should at least be a "nice-to-have", and it may be a low-level
>> blocker, as a lot of devices won't work when passed through without it.
>
> This is the stuff in tools/python/xen/xend/server/pciquirk.py ?
>
> pciback in upstream doesn't mention "quirk", which suggests there is
> no support for the necessary sysfs node either?

Ah, interesting -- that's worth tracking down.  Maybe there's a better
way to deal with quirks?  Or maybe it just hasn't been upstreamed yet
(or was perhaps never implemented in pvops?).  I'm using the Debian
squeeze 2.6.32-5-xen-686 kernel.
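
For reference, the xend side lives in
tools/python/xen/xend/server/pciquirk.py and, on the classic kernel,
talks to pciback through a sysfs node.  A minimal sketch of that
interaction is below -- hedged, since the node path is what I'd expect
from the classic 2.6.32-era pciback, and the "offset:size:mask" field
encoding is illustrative rather than authoritative:

    import os

    # Assumed classic-kernel pciback interface; whether the upstream
    # xen-pciback driver exposes this node at all is exactly the open
    # question above.
    QUIRKS_NODE = "/sys/bus/pci/drivers/pciback/quirks"

    def pciback_supports_quirks():
        """True if the running dom0 kernel exposes the quirks node."""
        return os.path.exists(QUIRKS_NODE)

    def apply_quirk(field):
        """Whitelist one config-space field so pciback lets a PV guest
        write it.  The "offset:size:mask" encoding here is a placeholder,
        not the authoritative record format."""
        if not pciback_supports_quirks():
            raise RuntimeError("no pciback quirks support in this kernel")
        with open(QUIRKS_NODE, "w") as f:
            f.write(field)

A quick "ls /sys/bus/pci/drivers/pciback/" on a pvops dom0 would settle
whether the node exists there.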

> tools/examples/xend-pci-quirks.sxp seems to have a quirk for only a
> single card?

Yes, and I could add two more cards just from experience with one of my
test boxen. :-)

> I don't think we want to implement an SXP parser for xl/libxl so if this
> is reimplemented I think a different format should be used.

Since we're using yajl anyway, JSON might not be a bad option.
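
As a purely hypothetical sketch, a JSON equivalent of
xend-pci-quirks.sxp could look something like this (the quirk name,
device ID, and field encoding are placeholders, not taken from the
real file):

    {
        "pci-quirks": [
            {
                "name": "example-nic",
                "pci-ids": [ "abcd:1234" ],
                "config-space-fields": [ "0x40:4:0xffffffff" ]
            }
        ]
    }

That would parse with the yajl bits libxl already carries, and it keeps
the same shape as the SXP: match on vendor:device, then list the
config-space fields a guest is allowed to touch.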

Anyway, I'll ping the Intel guy who recently posted a patch to libxl_pci.c.

 -George

>
> Anyway, I'll put this onto the list.
>
> Ian
>
>>
>> >              * xl support for "rtc_timeoffset" and "localtime" (Lin
>> >                Ming, patches posted)
>> >      * More formally deprecate xm/xend. Manpage patches already in
>> >        tree. Needs release noting and communication around -rc1 to
>> >        remind people to test xl.
>> >      * Domain 0 block attach & general hotplug when using qdisk backend
>> >        (need to start qemu as necessary etc) (Stefano S)
>> >      * file:// backend performance. qemu-xen-traditional's qdisk is
>> >        quite slow & blktap2 is not available in upstream kernels.
>> >        Need to consider our options:
>> >              * qemu-xen's qdisk is thought to perform well, but
>> >                qemu-xen is not yet the default. Complexity arises from
>> >                splitting qemu-for-qdisk out from qemu-for-dm and
>> >                running N qemus.
>> >              * potentially fully userspace blktap could be ready for
>> >                4.2
>> >      * use /dev/loop+blkback. This requires the loop driver AIO and
>> >        O_DIRECT patches, which are not (AFAIK) yet upstream.
>> >              * Leverage XCP's blktap2 DKMS work.
>> >              * Other ideas?
>> >      * Improved Hotplug script support (Roger Pau Monné, patches
>> >        posted)
>> >      * Block script support -- follows on from hotplug script (Roger
>> >        Pau Monné)
>> >
>> > hypervisor, nice to have:
>> >      * solid implementation of sharing/paging/mem-events (using work
>> >        queues) (Tim Deegan, Olaf Hering et al -- patches posted)
>> >              * "The last patch to use a waitqueue in
>> >                __get_gfn_type_access() from Tim works.  However, there
>> >                are a few users who call __get_gfn_type_access with the
>> >                domain_lock held. This part needs to be addressed in
>> >                some way."
>> >      * Sharing support for AMD (Tim, Andres).
>> >      * PoD performance improvements (George Dunlap)
>> >
>> > tools, nice to have:
>> >      * Configure/control paging via xl/libxl (Olaf Hering, lots of
>> >        discussion around interface, general consensus reached on what
>> >        it should look like)
>> >      * Upstream qemu feature patches:
>> >              * Upstream qemu PCI passthrough support (Anthony Perard,
>> >                patches sent)
>> >              * Upstream qemu save restore (Anthony Perard, Stefano
>> >                Stabellini, patches sent, waiting for upstream ack)
>> >      * Nested-virtualisation. Currently "experimental". Likely to
>> >        release that way.
>> >              * Nested SVM. Tested in a variety of configurations but
>> >                there are still some issues with the most important
>> >                use case (Win7 XP mode) [0] (Christoph Egger)
>> >              * Nested VMX. Needs nested EPT to be genuinely useful.
>> >                Need more data on testing status etc (Intel)
>> >      * Initial xl support for Remus (memory checkpoint, blackholing)
>> >        (Shriram, patches posted, blocked behind qemu save restore
>> >        patches)
>> >      * xl compatibility with xm:
>> >              * xl support for autospawning vncviewer (vncviewer=1 or
>> >                otherwise) (Goncalo Gomes)
>> >              * support for vif "rate" parameter (Mathieu Gagné)
>> >
>> > [0] http://lists.xen.org/archives/html/xen-devel/2012-03/msg00883.html
>> >
>> >
>> > _______________________________________________
>> > Xen-devel mailing list
>> > Xen-devel@xxxxxxxxxxxxx
>> > http://lists.xen.org/xen-devel
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel