Re: [Xen-devel] [PATCH 7/9] x86/vmx: Support load-only guest MSR list entries
On Tue, May 22, 2018 at 12:20:44PM +0100, Andrew Cooper wrote:
> Currently, the VMX_MSR_GUEST type maintains completely symmetric guest load
> and save lists, by pointing VM_EXIT_MSR_STORE_ADDR and VM_ENTRY_MSR_LOAD_ADDR
> at the same page, and setting VM_EXIT_MSR_STORE_COUNT and
> VM_ENTRY_MSR_LOAD_COUNT to the same value.
>
> However, for MSRs which we won't let the guest have direct access to, having
> hardware save the current value on VMExit is unnecessary overhead.
>
> To avoid this overhead, we must make the load and save lists asymmetric. By
> making the entry load count greater than the exit store count, we can maintain
> two adjacent lists of MSRs, the first of which is saved and restored, and the
> second of which is only restored on VMEntry.
>
> For simplicity:
> * Both adjacent lists are still sorted by MSR index.
> * It is undefined behaviour to insert the same MSR into both lists.
> * The total size of both lists is still limited to 256 entries (one 4k page).
>
> Split the current msr_count field into msr_{load,save}_count, and introduce a
> new VMX_MSR_GUEST_LOADONLY type, and update vmx_{add,find}_msr() to calculate
> which sublist to search, based on type. VMX_MSR_HOST has no logical sublist,
> whereas VMX_MSR_GUEST has a sublist between 0 and the save count, while
> VMX_MSR_GUEST_LOADONLY has a sublist between the save count and the load
> count.
>
> One subtle point is that inserting an MSR into the load-save list involves
> moving the entire load-only list, and updating both counts.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
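For anyone following along, the per-type sublist selection described in the commit message above could be sketched roughly as follows. This is a hypothetical, simplified illustration (struct layout, field names and find_msr() are stand-ins, not the actual Xen vmx_find_msr() implementation):

/*
 * Rough illustrative sketch only: how the sublist bounds described above
 * might be selected when searching the guest load/save page.
 */
#include <stddef.h>
#include <stdint.h>

struct msr_entry {
    uint32_t index;
    uint32_t reserved;
    uint64_t data;
};

enum vmx_msr_list_type {
    VMX_MSR_HOST,           /* Loaded on VMExit; lives on a separate page. */
    VMX_MSR_GUEST,          /* Saved on VMExit, loaded on VMEntry.         */
    VMX_MSR_GUEST_LOADONLY, /* Only loaded on VMEntry.                     */
};

struct arch_vmx {
    struct msr_entry *msr_area;  /* Guest load/save page (256 entries max). */
    unsigned int msr_save_count; /* VM_EXIT_MSR_STORE_COUNT.                */
    unsigned int msr_load_count; /* VM_ENTRY_MSR_LOAD_COUNT.                */
};

/*
 * VMX_MSR_GUEST entries occupy [0, msr_save_count); VMX_MSR_GUEST_LOADONLY
 * entries occupy [msr_save_count, msr_load_count).  Both sublists are kept
 * sorted by MSR index, so a binary search would also work.
 */
static struct msr_entry *find_msr(const struct arch_vmx *vmx, uint32_t msr,
                                  enum vmx_msr_list_type type)
{
    unsigned int start, end, i;

    switch ( type )
    {
    case VMX_MSR_GUEST:
        start = 0;
        end = vmx->msr_save_count;
        break;

    case VMX_MSR_GUEST_LOADONLY:
        start = vmx->msr_save_count;
        end = vmx->msr_load_count;
        break;

    default: /* VMX_MSR_HOST uses a separate page; not shown here. */
        return NULL;
    }

    for ( i = start; i < end; i++ )
        if ( vmx->msr_area[i].index == msr )
            return &vmx->msr_area[i];

    return NULL;
}

Keeping the load-only entries packed directly after the load-save entries is what lets a single VM_ENTRY_MSR_LOAD_COUNT, set larger than VM_EXIT_MSR_STORE_COUNT, cover both sublists.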
Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Just one nit below.
> @@ -1423,8 +1446,11 @@ int vmx_add_msr(struct vcpu *v, uint32_t msr, uint64_t val,
>          break;
>
>      case VMX_MSR_GUEST:
> -        __vmwrite(VM_EXIT_MSR_STORE_COUNT, ++arch_vmx->msr_count);
> -        __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, arch_vmx->msr_count);
> +        __vmwrite(VM_EXIT_MSR_STORE_COUNT, ++arch_vmx->msr_save_count);
> +
> +        /* Fallthrough */
> +    case VMX_MSR_GUEST_LOADONLY:
> +        __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, ++arch_vmx->msr_load_count);
>          break;
>      }
Would it make sense to add something like:
ASSERT(arch_vmx->msr_save_count <= arch_vmx->msr_load_count);
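To spell out why that invariant should always hold, here is an equally rough sketch of the insertion path (reusing the hypothetical types from the sketch above; insert_msr() and its signature are illustrative, not the patch itself). A load-save insertion shifts the whole load-only sublist up by one and bumps both counts, while a load-only insertion only bumps the load count:

#include <assert.h>
#include <string.h>

static void insert_msr(struct arch_vmx *vmx, unsigned int slot,
                       uint32_t msr, uint64_t val,
                       enum vmx_msr_list_type type)
{
    struct msr_entry *entry = &vmx->msr_area[slot];

    /* Make room at 'slot', moving every later entry (for a VMX_MSR_GUEST
     * insertion this includes the entire load-only sublist) up by one. */
    memmove(entry + 1, entry,
            (vmx->msr_load_count - slot) * sizeof(*entry));

    entry->index = msr;
    entry->data = val;

    switch ( type )
    {
    case VMX_MSR_GUEST:
        vmx->msr_save_count++;
        /* Fallthrough - a load-save entry is also loaded on VMEntry. */
    case VMX_MSR_GUEST_LOADONLY:
        vmx->msr_load_count++;
        break;

    default:
        break;
    }

    /* The suggested sanity check: the save list is a prefix of the load list. */
    assert(vmx->msr_save_count <= vmx->msr_load_count);
}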
Thanks.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel