Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
> 
> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
> > On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
> >> When toolstack updates MSR policy, this MSR offset (which is the last
> >> index in the hypervisor MSR range) is used to indicate hypervisor
> >> behavior when guest accesses an MSR which is not explicitly emulated.
> > It's kind of weird to use an MSR to store this. I assume this is done
> > for migration reasons?
> 
> 
> Not really. It just seemed to me that MSR policy is the logical place to do
> that. Because it *is* a policy of how to deal with such accesses, isn't it?

I agree that msr_policy seems like the most suitable place to convey this
information between the toolstack and Xen. I wonder if it would be fine to
have fields in msr_policy that don't directly translate into an MSR value?

However, having such a list of ignored MSRs in msr_policy makes the whole
get/set logic a bit weird, as the user would have to provide a buffer in
order to get the list of ignored MSRs.

> 
> > Isn't it possible to convey this data in the xl migration stream
> > instead of having to pack it with MSRs?
> 
> 
> I haven't looked at this but again --- the feature itself has nothing to do
> with migration. The fact that folding it into policy makes migration of this
> information "just work" is just a nice side benefit of this implementation.

IMO it feels slightly weird that we have to use an MSR (MSR_UNHANDLED) to
store this option; it seems like wasting an MSR index when there's really no
need for it to be stored in an MSR, as we don't expose it to guests.

It would seem more natural for such an option to live in arch_domain as a
rangeset, for example. Maybe introduce a new DOMCTL to set it?

#define XEN_DOMCTL_msr_set_ignore ...

struct xen_domctl_msr_set_ignore {
    uint32_t index;
    uint32_t size;
};

Thanks, Roger.
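
The DOMCTL idea sketched above could plausibly be wired up along the
following lines. This is only an illustrative sketch: the
XEN_DOMCTL_msr_set_ignore name and struct fields follow Roger's outline,
while the msr_ignore_ranges field in arch_domain, its initialisation, and
the handler/lookup code are assumptions rather than existing Xen code.

    /* Assumed new field in struct arch_domain: */
    struct rangeset *msr_ignore_ranges;

    /* Assumed new case in arch_do_domctl(): */
    case XEN_DOMCTL_msr_set_ignore:
    {
        const struct xen_domctl_msr_set_ignore *mi =
            &domctl->u.msr_set_ignore;

        if ( !mi->size )
        {
            ret = -EINVAL;
            break;
        }

        /* Record [index, index + size) as "ignore unhandled accesses". */
        ret = rangeset_add_range(d->arch.msr_ignore_ranges, mi->index,
                                 mi->index + mi->size - 1);
        break;
    }

    /* And in the unhandled-MSR emulation path, something like: */
    if ( rangeset_contains_singleton(d->arch.msr_ignore_ranges, msr) )
        return X86EMUL_OKAY; /* silently ignore the access */

Keeping the ranges in a per-domain rangeset would also avoid the
buffer-sizing awkwardness described for msr_policy above, since Xen only
needs membership tests at emulation time and the toolstack never has to
read the list back.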