
Re: [Xen-devel] x86 Community Call - Wed Aug 15, 14:00 - 15:00 UTC - Agenda items



>>> On 13.08.18 at 09:46, <lars.kurth@xxxxxxxxxx> wrote:
> proposed topics so far:
>     * 4.10+ changes to Xen's memory scrubbing: discussion of the changes
> made to it in recent versions of Xen (4.10+) - Christopher
>     * Project management work to keep the momentum going - primarily
> looking for Intel updates

The timing isn't really good for this, but deferring it to the next meeting
would also be too long. I realize everyone's quite busy; I'm myself also
struggling to find time to look at
- VMX MSRs policy for Nested Virt: part 1 (I've looked over this, and I
  think it's okay, but nested virtualization work in particular wants
  both maintainers and Andrew to look it over)
- vpci: add support for SR-IOV capability
- paravirtual IOMMU interface
- x86/domctl: Save info for one vcpu instance
- SSBD AMD via LS CFG Enablement
and that's not to speak of "add vIOMMU support with irq remapping
function of virtual VT-d". At the same time I'm myself in an increasingly
awkward position to do / post further work, as patch series of mine have
been stalled, in part since long before the 4.11 freeze (listing only
series here; there are also individual stalled patches):
- x86: improve PDX <-> PFN and alike translations
- x86: assorted assembly related cleanup
- x86: indirect call overhead reduction
- x86/HVM: implement memory read caching
- x86: more power-efficient CPU parking
And I don't even dare to guess what is going to happen to the AVX512
patches I have in the works for the emulator.

Bottom line - I think we need to talk about how we mean to unblock
large chunks of work.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

