> > I am just wondering how much dom0 cares about this? I mean if you use
> > blkback, netback - etc - they are all in the kernel. The device drivers
> > are also in the kernel.
> There are always going to be some userspace processes, even with stubdomains.
Stubdomains? Linux HVMs now have PV-on-HVM, and for Windows there is a
multitude of PV drivers available. But sure, there are some processes, like
snort or other packet-filtering userland software.
> Besides if we have HVM dom0, we can enable
> XENFEAT_auto_translated_physmap and EPT and have the same level of
> performance as a PV on HVM guest. Moreover, since we wouldn't be using
> the mmu pvops anymore we could drop them completely: that would greatly
Sure. It also means you MUST have an IOMMU in the box.
> simplify the Xen maintenance in the Linux kernel as well as gain back
> some love from the x86 maintainers :)
> The way I see it, normal Linux guests would be PV on HVM guests, but we
> still need to do something about dom0.
> This work would make dom0 exactly like PV on HVM guests apart from
> the boot sequence: dom0 would still boot from xen_start_kernel,
> everything else would be pretty much the same.
Ah, so not HVM exactly (you would only use EPT/NPT/RVI/HAP for the
pagetables).. and PV for startup, spinlocks, timers, debug, CPU, and
backends. I thought sticking the PV guest in the HVM container, as Mukesh
made work, would also benefit.
Or just come back to the idea of "real" HVM device driver domains
and have the PV dom0 be a light one loading the rest. But the setup of
it is just so complex.. And the PV dom0 needs to deal with the PCI backend,
xenstore, and be able to comprehend ACPI _PRT... and then launch the "device
driver" dom0, which in its simplest form would have all of the devices
passed in to it.
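For what it's worth, such a driver domain's config could look something like the sketch below (xl syntax; the name, kernel paths, and PCI BDFs are made up for illustration, and PCI passthrough like this requires the IOMMU mentioned earlier):

```
# Hypothetical driver-domain config sketch (xl syntax), not a tested setup.
name    = "driverdom"
kernel  = "/boot/vmlinuz-driverdom"
ramdisk = "/boot/initrd-driverdom"
memory  = 512
vcpus   = 2
# pass the host devices through to the driver domain (needs an IOMMU)
pci = [ '0000:00:19.0', '0000:00:1f.2' ]
```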
So four payloads: PV dom0, PV dom0 initrd, HVM dom0, HVM dom0 initrd :-)
Ok, that is too cumbersome. Maybe embed the PV dom0+initrd in the Xen
hypervisor binary.. I should stop here.
Xen-devel mailing list