
[Xen-devel] Rudolph: merging Vixen and Comet

Hi all,

Two solutions have been proposed to mitigate Meltdown: one is called Vixen and
the other Comet. The long-term goal is to merge the two implementations into
one.

Here I list the differences between the two implementations.

                      Vixen                          Comet
Boot mode             HVM                            PVH + HVM
Kconfig options       XEN_GUEST                      XEN_GUEST + PVH_GUEST + 
Xen build system      No change                      New build target for shim 
Guest console         Output only                    Bi-directional
Guest domid           1 or set via shim option       1 or retrieved via cpuid
Guest type            Hardware domain                Normal domain
Time source           Emulated                       Xen PV clock
Shutdown              PV + HW                        PV
SI mapping            Reserved page                  Fixed map, PFN chosen at 
VCPU id               Handled by L1                  Provided by L0 if available
VCPU runstate         Forwarded to L0                Handled by L1
Xen version           L0 version                     L1 version
CPUID faulting        None                           Changes for Intel and AMD
Grant table           What is forwarded is more or less the same but differs in 
Event channel setup   3 mechanisms                   1 mechanism
Event channel         ECS_PROXY state                Uses ECS_RESERVED
                      Differences in what gets forwarded
Migration             No                             Yes
CPU hotplug           No                             Yes
Memory hotplug        No                             Yes
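
As a concrete illustration of the "Guest domid ... retrieved via cpuid" row:
Xen advertises itself to guests through hypervisor CPUID leaves starting at
0x40000000, identified by the "XenVMMXenVMM" signature, and series-specific
information can be exposed through further leaves. The sketch below (plain
freestanding C, x86 only; the exact leaf Comet uses to expose the domid is
defined by that series and is not shown here) probes for the Xen leaf base:

```c
#include <stdint.h>
#include <string.h>

/* Issue CPUID for the given leaf (x86 only). */
static void cpuid(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                  uint32_t *ecx, uint32_t *edx)
{
    __asm__ __volatile__("cpuid"
                         : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
                         : "a" (leaf), "c" (0));
}

/* Assemble the 12-character hypervisor signature from EBX/ECX/EDX. */
static void sig_from_regs(uint32_t ebx, uint32_t ecx, uint32_t edx,
                          char sig[13])
{
    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    sig[12] = '\0';
}

/*
 * Scan the hypervisor leaf range for the Xen signature.  Returns the
 * base leaf (usually 0x40000000) or 0 if not running on Xen.
 */
static uint32_t xen_cpuid_base(void)
{
    uint32_t base, eax, ebx, ecx, edx;
    char sig[13];

    for (base = 0x40000000; base < 0x40010000; base += 0x100) {
        cpuid(base, &eax, &ebx, &ecx, &edx);
        sig_from_regs(ebx, ecx, edx, sig);
        if (strcmp(sig, "XenVMMXenVMM") == 0)
            return base;
    }
    return 0;
}
```

In the shim context the equivalent probe runs inside the L1 Xen; it is
written here as standalone C purely for illustration.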

These are the things I can think of when comparing the two series side
by side.  Feel free to provide additions and/or corrections.  The list
serves as guidance on which areas need attention.

