[Xen-devel] XEN_DOMCTL_setvcpucontext and domain_pause()
Hello,

Looking at xen/common/domctl.c, it appears that during the handling of
XEN_DOMCTL_setvcpucontext a domain_pause() happens unconditionally:

465         if ( ret == 0 )
466         {
467             domain_pause(d);
468             ret = arch_set_info_guest(v, c);
469             domain_unpause(d);
470
471             if ( ret == -ERESTART )
472                 ret = hypercall_create_continuation(
473                     __HYPERVISOR_domctl, "h", u_domctl);
474         }

I assume that this is because in xen/arch/x86/domain.c,
arch_set_info_guest() uses v->domain here:

1101     if ( v->vcpu_id == 0 )
1102     {
1103         /*
1104          * In the restore case we need to deal with L4 pages which got
1105          * initialized with m2p_strict still clear (and which hence lack the
1106          * correct initial RO_MPT_VIRT_{START,END} L4 entry).
1107          */
1108         if ( d != current->domain && !VM_ASSIST(d, m2p_strict) &&
1109              is_pv_domain(d) && !is_pv_32bit_domain(d) &&
1110              test_bit(VMASST_TYPE_m2p_strict, &c.nat->vm_assist) &&
1111              atomic_read(&d->arch.pv_domain.nr_l4_pages) )
1112         {
1113             bool_t done = 0;
1114
1115             spin_lock_recursive(&d->page_alloc_lock);
1116
1117             for ( i = 0; ; )
1118             {
1119                 struct page_info *page = page_list_remove_head(&d->page_list);
1120
1121                 if ( page_lock(page) )
1122                 {
1123                     if ( (page->u.inuse.type_info & PGT_type_mask) ==
1124                          PGT_l4_page_table )
1125                         done = !fill_ro_mpt(page_to_mfn(page));
1126
1127                     page_unlock(page);
1128                 }
1129
1130                 page_list_add_tail(page, &d->page_list);
1131
1132                 if ( done || (!(++i & 0xff) && hypercall_preempt_check()) )
1133                     break;
1134             }
1135
1136             spin_unlock_recursive(&d->page_alloc_lock);
1137
1138             if ( !done )
1139                 return -ERESTART;
1140         }
1141
1142         d->vm_assist = c(vm_assist);
1143     }

However, this is not optimal for HVM guests. Since only PV guests would
seem to require the domain pause (see the is_pv_domain(d) test at line
1109), I propose to either check that the domain is PV and only then
pause / unpause it in domctl.c, or to pause it only when necessary in
arch_set_info_guest().

If I haven't missed anything here, I'm happy to post a patch.

Thanks,
Razvan
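
P.S. To make the first option a bit more concrete, here is a rough,
untested sketch of what the domctl.c hunk quoted above could look like
with the pause made conditional; all identifiers are the ones already
present in that hunk, and the exact condition would of course need
review:

        if ( ret == 0 )
        {
            /* Untested sketch: only PV guests appear to need the pause. */
            if ( is_pv_domain(d) )
                domain_pause(d);

            ret = arch_set_info_guest(v, c);

            if ( is_pv_domain(d) )
                domain_unpause(d);

            if ( ret == -ERESTART )
                ret = hypercall_create_continuation(
                    __HYPERVISOR_domctl, "h", u_domctl);
        }

The alternative (pausing only when actually required inside
arch_set_info_guest() itself) would leave the common code untouched,
but I haven't sketched that here.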