Re: [Xen-devel] Radical proposal v2: Publish Amazon's version now, Citrix's version soon
>>> On 11.01.18 at 17:56, <sstabellini@xxxxxxxxxx> wrote:
> On Thu, 11 Jan 2018, Rich Persaud wrote:
>> On Jan 11, 2018, at 11:36, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
>>> On Thu, 11 Jan 2018, George Dunlap wrote:
>>>> On 01/11/2018 04:23 PM, Stefano Stabellini wrote:
>>>>> On Thu, 11 Jan 2018, Jan Beulich wrote:
>>>>>> On 10.01.18 at 18:25, <sstabellini@xxxxxxxxxx> wrote:
>>>>>>> On Wed, 10 Jan 2018, George Dunlap wrote:
>>>>>>>> * Executive summary
>>>>>>>>
>>>>>>>> - We've agreed on a "convergence" point for PV shim functionality that
>>>>>>>>   covers as many users as possible:
>>>>>>>>   - 'HVM' functionality: boots in HVM mode, has support for Xen 3.4
>>>>>>>>     event channels, &c, booted via 'sidecar'
>>>>>>>>   - 'PVH' functionality: boots in PVH mode, booted via toolstack
>>>>>>>>     changes
>>>>>>>>
>>>>>>>> - "Vixen" (the Amazon shim) and PVH shim (mostly developed by Citrix)
>>>>>>>>   each cover some users and not others; neither one (yet) covers all
>>>>>>>>   users
>>>>>>>
>>>>>>> Sorry for being punctilious, but neither one can cover all users: there
>>>>>>> are users without VT-x on their platform, and both approaches require
>>>>>>> VT-x.
>>>>>>
>>>>>> For the record, yesterday I decided to make an attempt to create a very
>>>>>> simplistic patch to deal with the issue in the hypervisor, ignoring
>>>>>> (almost) all performance considerations (not all, because I didn't want
>>>>>> to go the "disable caching" route). I've dealt with some of the
>>>>>> to-be-expected early bugs, but I'm now debugging a host hang (note:
>>>>>> apparently not a triple fault, as the box doesn't reboot, yet a triple
>>>>>> fault is what I would have expected if anything were wrong or missing
>>>>>> here).
>>>>>>
>>>>>> I know that's late, and I have to admit that I myself don't understand
>>>>>> why I didn't consider doing this earlier on, but the much increased
>>>>>> pressure to get something like the shim out, which
>>>>>> - doesn't address all cases
>>>>>> - requires changes to how VMs are being created (which will likely be
>>>>>>   a problem for various customers)
>>>>>> - will later want those changes undone
>>>>>> plus the pretty obvious impossibility of backporting something like
>>>>>> Andrew's (not yet complete) series to baselines as old as 3.2, made it
>>>>>> seem to me that some (measurable!) performance overhead can't be all
>>>>>> that bad in the given situation.
>>>>>
>>>>> Thank you for giving it a look! I completely agree with you on these
>>>>> points. I think we should approach this problem with the assumption
>>>>> that this is going to be the only long-term solution to SP3, while
>>>>> Vixen (or PVshim) remain incomplete stopgaps for now.
>>>>
>>>> Well, the pvshim is a feature for people who want to be able to
>>>> eliminate all PV interfaces to the hypervisor whatsoever for security /
>>>> maintenance purposes. I do agree a "proper" fix for PV would be good,
>>>> assuming the overhead is lower than pvshim.
>>>
>>> Why "assuming the overhead is lower than pvshim"? What if the overhead
>>> is higher? As I said, there are users that *cannot* deploy HVM because
>>> it is not available to them.
>>>
>>> In other words, PVshim is irrelevant to me because I cannot use it.
>>
>> Would a "proper" PV fix (does this have a codename?) benefit stubdoms?
>> These are needed to isolate Qemu, e.g. on an HVM driver domain. PVshim
>> does not yet support driver domains.
>
> Yes, good point. A "proper" fix should support stubdoms too. I think
> that Jan's approach above should be able to cover them.

Well, any in-hypervisor workaround for PV will - naturally - cover all
forms of PV guests. I don't view my patch (which allows Dom0 to come up
as of five minutes ago) as a permanent solution though; I'm pretty
convinced Andrew's series, once completed, would have much better
performance characteristics. But for backporting purposes I think a
single patch mostly using infrastructure which has been around forever
is a better basis, and the performance impact, at least on really old
versions, would then simply need to be accepted.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel