Re: [PATCH v2 3/4] x86: Allow non-faulting accesses to non-emulated MSRs if policy permits this
On Thu, Feb 18, 2021 at 12:57:13PM +0100, Jan Beulich wrote:
> On 18.02.2021 12:24, Roger Pau Monné wrote:
> > On Wed, Jan 20, 2021 at 05:49:11PM -0500, Boris Ostrovsky wrote:
> >> --- a/xen/arch/x86/hvm/vmx/vmx.c
> >> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >> @@ -3017,8 +3017,8 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> >>              break;
> >>          }
> >>
> >> -        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
> >> -        goto gp_fault;
> >> +        if ( guest_unhandled_msr(curr, msr, msr_content, false, true) )
> >> +            goto gp_fault;
> >>      }
> >>
> >>   done:
> >> @@ -3319,10 +3319,8 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> >>               is_last_branch_msr(msr) )
> >>              break;
> >>
> >> -        gdprintk(XENLOG_WARNING,
> >> -                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> >> -                 msr, msr_content);
> >> -        goto gp_fault;
> >> +        if ( guest_unhandled_msr(v, msr, &msr_content, true, true) )
> >> +            goto gp_fault;
> >>      }
> >
> > I think this could be done in hvm_msr_read_intercept instead of having
> > to call guest_unhandled_msr from each vendor specific handler?
> >
> > Oh, I see, that's likely done to differentiate between guest MSR
> > accesses and emulator ones? I'm not sure we really need to make a
> > difference between guest MSR accesses and emulator ones, surely in
> > the past they would be treated equally?
>
> We did discuss this before. Even if they were treated the same in
> the past, that's not correct, and hence we shouldn't suppress the
> distinction going forward. A guest explicitly asking to access an
> MSR (via RDMSR/WRMSR) is entirely different from the emulator
> perhaps just probing an MSR, falling back to some default behavior
> if it's unavailable.

Ack, then placing the calls to guest_unhandled_msr in vendor code
seems like the best option.

Thanks, Roger.