
RE: [Xen-devel] [PATCH v2] Enable SMEP CPU feature support for XEN hypervisor



>>+/* nosmep: if true, Intel SMEP is disabled. */
>>+static bool_t disable_smep __cpuinitdata;
>
>__initdata
>
>>+boolean_param("nosmep", disable_smep);
>>+
>>+static void __cpuinit setup_smep(struct cpuinfo_x86 *c)
>
>__init

I did want to change the code as you suggested, so that it is only called on
the BP code path, but I ran into an abnormal system response issue.  Fixing
it properly would require changing some of the AP boot code path as well,
while this patch is meant to cover SMEP only.

Actually, your suggestion applies to several other system initialization
variables and functions as well.  I would prefer to clean up all such cases
in a separate patchset.

I also mentioned this in another reply to Keir.

>>+{
>>+    if ( cpu_has(c, X86_FEATURE_SMEP) ) {
>>+        if ( unlikely(disable_smep) )
>>+            setup_clear_cpu_cap(X86_FEATURE_SMEP);
>>+        else {
>>+            set_in_cr4(X86_CR4_SMEP);
>>+            mmu_cr4_features |= X86_CR4_SMEP;

>Why? We told you on the previous patch version that
>set_in_cr4() already does this.

I misunderstood the code even after you reminded me...  I will drop the
redundant line.
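
For anyone else following the thread, set_in_cr4() already records the bit in
mmu_cr4_features before writing CR4, roughly like this (paraphrased from my
memory of xen/arch/x86/cpu/common.c, so treat it as a sketch rather than the
exact source):

    /* Paraphrased sketch of Xen's set_in_cr4(): it records the bit in
     * the global mmu_cr4_features (later used when bringing up other
     * CPUs) and also sets it in the live CR4, which is why the explicit
     * |= X86_CR4_SMEP in my patch was redundant. */
    void set_in_cr4(unsigned long mask)
    {
        mmu_cr4_features |= mask;
        write_cr4(read_cr4() | mask);
    }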
 
>>@@ -268,6 +284,8 @@ void __cpuinit generic_identify(struct c
>>        c->x86_capability[X86_FEATURE_FSGSBASE / 32] = ebx;
>>    }
>>
>>+    setup_smep(c);
>
>Still misplaced.
>

As explained above, the current placement should be OK for now; I will
revisit it in the follow-up patchset.
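
To make the intended follow-up concrete, this is roughly the shape I have in
mind (a sketch only; the BP-path call site at the end is hypothetical, and
finding the right one is exactly why I would rather do this in a separate
patchset):

    /* Sketch of the suggested direction: init-only data and text, with
     * a single call from the boot-processor path instead of from
     * generic_identify() for every CPU. */
    static bool_t disable_smep __initdata;
    boolean_param("nosmep", disable_smep);

    static void __init setup_smep(struct cpuinfo_x86 *c)
    {
        if ( cpu_has(c, X86_FEATURE_SMEP) )
        {
            if ( unlikely(disable_smep) )
                setup_clear_cpu_cap(X86_FEATURE_SMEP);
            else
                set_in_cr4(X86_CR4_SMEP); /* also updates mmu_cr4_features */
        }
    }

    /* ... invoked once for the BP, e.g.: setup_smep(&boot_cpu_data); */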

>>+ u_flag = _PAGE_USER;

>How about
>
>u_flag = cpu_has_smep ? _PAGE_USER : 0;
>
>avoiding the repeated cpu_has_smep checks below.

Will do in the next version.
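
To spell out what that buys us (a simplified, illustrative walk, not the
final code; the four-level accumulation below is mine):

    /* With u_flag computed once, disabling SMEP makes the user-bit
     * tracking collapse to zero, so no per-level cpu_has_smep test is
     * needed. */
    unsigned int u_flag = cpu_has_smep ? _PAGE_USER : 0;
    unsigned int page_user = u_flag;

    page_user &= l4e_get_flags(l4e);
    page_user &= l3e_get_flags(l3e);
    page_user &= l2e_get_flags(l2e);
    page_user &= l1e_get_flags(l1e);

    /* A supervisor-mode instruction fetch through a mapping that is
     * user-accessible at every level is an SMEP violation: */
    if ( page_user && (error_code & PFEC_insn_fetch) &&
         !(error_code & PFEC_user_mode) )
        return smep_fault;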

>* Disabling interrupts prevents TLB flushing, and hence prevents
>* page tables from becoming invalid under our feet during the walk.
>*/
>local_irq_save(flags);
>- is_spurious = __spurious_page_fault(addr, error_code);
>+ pf_type = __page_fault_type(addr, error_code);
>local_irq_restore(flags);
>
>- return is_spurious;
>+ if ( pf_type == smep_fault )
>+ domain_crash_synchronous();

>I don't think you can domain_crash_synchronous() here, domain_crash()
>is perhaps the only possibility (and even if it was possible, I think there
>was agreement to not add new calls to the former function and always
>use the latter instead).
>

So should I change it to domain_crash()?

I'm also curious about the background: how was that agreement reached?
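
As I understand the difference: domain_crash() marks the domain as crashed
and returns, so the caller unwinds normally, while domain_crash_synchronous()
never returns.  The call site would then look something like this (a sketch;
the message and the spurious_fault enum value are placeholders of mine):

    local_irq_save(flags);
    pf_type = __page_fault_type(addr, error_code);
    local_irq_restore(flags);

    if ( pf_type == smep_fault )
    {
        printk(XENLOG_G_ERR "SMEP violation\n"); /* placeholder message */
        domain_crash(current->domain);           /* returns; keep unwinding */
    }

    return (pf_type == spurious_fault);          /* placeholder enum value */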

>That said I'm not certain it's a good choice at all, as Keir had already
>pointed out on the first patch version.

Do you mean crashing the PV guest?

One attack this can stop is a malicious PV guest planting code at virtual
address 0 and then exploiting a NULL pointer dereference in the hypervisor
to take control of the system, although such opportunities are rare.
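
To illustrate the class of bug (entirely hypothetical code, just showing the
shape of what SMEP defends against):

    /* Hypothetical NULL function pointer bug in hypervisor context.
     * If a malicious PV guest maps user-accessible code at virtual
     * address 0, the call below executes guest-chosen code at ring 0.
     * With SMEP, the instruction fetch from a user-accessible page
     * raises a page fault instead. */
    struct hypothetical_ops {
        void (*handler)(void);
    };

    static void buggy_hypervisor_path(struct hypothetical_ops *ops)
    {
        ops->handler(); /* handler == NULL: control transfers to address 0 */
    }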

>Finally, spurious_page_fault() has two call sites, only one of which is
>for faults that occurred in hypervisor context (which you seem to
>assume is always the case). For the one where it's getting called
>for a fault in guest context, using domain_crash() is probably correct,
>but a (possibly guest-chosen) alternative would be to forward the
>(non-spurious) fault to the guest. Again, I think Keir hinted in that
>direction already in response to the first patch version.

For a 64-bit PV guest, an SMEP fault can only happen in hypervisor mode
(ring 0).

For a 32-bit PV guest, an SMEP fault can happen in both hypervisor mode and
guest kernel mode (rings 0 and 1).  When it happens in ring 1, we could
probably inject it into the guest kernel and let it kill the currently
running process.  But we have decided not to let the guest see SMEP, since,
as Keir said, the guest cannot influence whether SMEP is enabled or disabled.
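
For completeness, the forwarding option for ring-1 faults would look
something like this (a hypothetical sketch; we will not do this, since the
guest never sees SMEP):

    /* Hypothetical, rejected alternative: forward ring-1 (guest kernel)
     * SMEP faults to the guest, crash the domain for ring-0 ones. */
    if ( pf_type == smep_fault )
    {
        if ( guest_mode(regs) )                     /* ring 1 on 32-bit PV */
            propagate_page_fault(addr, error_code); /* guest handles it */
        else
            domain_crash(current->domain);          /* hypervisor-mode fault */
    }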

Thanks!
-Xin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

