
[Xen-devel] Xen 4.2 TODO List Update



This week's update. Please send me corrections (especially
"done"). Lots of DONE this week, nice to see. We've had patches posted
for most of the blockers too, AFAICT.

hypervisor, blockers:

      * round-up of the closing of the security hole in MSI-X
        passthrough (uniformly - i.e. even for Dom0 - disallowing write
        access to MSI-X table pages). (Jan Beulich -- more fixes
        required than first thought, patches posted)
      * domctls / sysctls set up to modify scheduler parameters, like
        the credit1 timeslice and schedule rate. (George Dunlap)
      * get the interface changes for sharing/paging/mem-events done and
        dusted so that 4.2 is a stable API that we hold to. (Tim Deegan,
        Andres Lagar-Cavilla et al)
              * mem event ring management DONE
              * sharing patches posted

tools, blockers:

      * libxl stable API -- we would like 4.2 to define a stable API
        which downstreams can start to rely on not changing. Aspects of
        this are:
              * event handling (Ian J, DONE)
              * drop libxl_device_model_info (move bits to build_info or
                elsewhere as appropriate). (Ian Campbell, patches
                posted, repost pending).
              * add libxl_defbool and generally try and arrange that
                memset(foo,0,...) requests the defaults (Ian Campbell,
                repost pending)
              * topologyinfo datastructure should be a list of tuples,
                not a tuple of lists. (Ian Campbell, patches posted,
                repost pending)
      * xl to use json for machine readable output instead of sexp by
        default (Ian Campbell, patch posted, repost pending)
      * xl support for vcpu pinning (Dario Faggioli, DONE)
      * xl feature parity with xend wrt driver domain support (George
        Dunlap)
      * Integrate qemu+seabios upstream into the build (Stefano, DONE).
        No change in default qemu for 4.2.
      * More formally deprecate xm/xend. Manpage patches already in
        tree. Needs a mention in the release notes and communication
        around -rc1 to remind people to test xl.

hypervisor, nice to have:

      * solid implementation of sharing/paging/mem-events (using work
        queues) (Tim Deegan, Olaf Hering et al)
      * A long-standing issue is a fully synchronized p2m (locking
        lookups) (Andres Lagar-Cavilla)
      * NUMA improvement: domain affinity consistent with cpupool
        membership (Dario Faggioli, Juergen Gross -- DONE)

tools, nice to have:

      * Hotplug script stuff -- internal to libxl (I think, therefore I
        didn't put this under stable API above) but still good to have
        for 4.2? Roger Pau Monné was looking at this but it's looking
        like a big can of worms. (discussion on-going; patches posted?)
      * Block script support -- follows on from hotplug scripts (Roger
        Pau Monné)
      * libyajl v2 support (patch posted by Roger Pau Monné, general
        agreement that this shouldn't be blocked on autoconf but that we
        probably will take both)
      * Configure/control paging via xl/libxl (Olaf Hering)
      * Upstream qemu feature patches:
              * Upstream qemu PCI passthrough support (Anthony Perard)
              * Upstream qemu save restore (Anthony Perard)
      * Nested-virtualisation (currently should be marked
        experimental, likely to release that way? Consider nested-svm
        separately from nested-vmx. Nested-svm is in better shape)
      * Initial xl support for Remus (memory checkpoint, blackholing)
        (Shriram, patches posted)

tools, need to decide if pre- or post-4.2 feature:

      * Autoconf (Roger Pau Monné posted a patch, we'll probably take
        this for 4.2)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

