
Re: [Xen-devel] x86's context switch ordering of operations



Jan Beulich wrote:
> 1) How does the storing of vcpu_info_mfn in the hypervisor survive
> migration or save/restore? The mainline Linux code, which uses this
> hypercall, doesn't appear to make any attempt to revert to using the
> default location during suspend or to re-setup the alternate location
> during resume (but of course I'm not sure that guest is save/restore/
> migrate ready in the first place). I would imagine it to be at least
> difficult for the guest to manage its state post-resume without the
> hypervisor having restored the previously established alternative
> placement.

The only kernel which uses it is 32-on-32 pvops, and that doesn't currently support migration. It would be easy for the guest to restore that state for itself shortly after resuming.
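For reference, a rough sketch of what that post-resume re-registration might look like on the guest side - VCPUOP_register_vcpu_info and struct vcpu_register_vcpu_info are the existing public interface, while the hook name, the per-CPU xen_vcpu_info copy and the header paths here are just illustrative:

#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/mm.h>
#include <xen/interface/vcpu.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>

/* Per-CPU copy of the vcpu_info the guest registered before suspend
 * (mirroring what the 32-on-32 pvops code keeps). */
static DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);

/* Illustrative resume hook: redo the alternate placement that the
 * hypervisor forgets across save/restore, before re-enabling events
 * on this CPU. */
static void xen_vcpu_restore_placement(int cpu)
{
	struct vcpu_register_vcpu_info info;
	struct vcpu_info *vcpup = &per_cpu(xen_vcpu_info, cpu);

	info.mfn = virt_to_mfn(vcpup);
	info.offset = offset_in_page(vcpup);

	/* If the hypercall can fail (see below), the guest should fall
	 * back to the shared_info slots rather than BUG here. */
	BUG_ON(HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info));
}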

I still need to add 32-on-64 and 64-on-64 implementations for this. Just haven't looked at it yet.

> 2) The implementation in the hypervisor seems to have added yet another
> scalability issue (on 32-bit), as this is being carried out using
> map_domain_page_global() - if there are sufficiently many guests with
> sufficiently many vCPU-s, there just won't be any space left at some
> point. This worries me especially in the context of seeing a call to
> sh_map_domain_page_global() that is followed by a BUG_ON() checking
> whether the call failed.

Yes, we discussed it, and, erm, don't do that. Guests should be able to deal with VCPUOP_register_vcpu_info failing, but that doesn't address overall heap starvation.
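To make "deal with it failing" concrete, the guest-side handling could look roughly like the sketch below - have_vcpu_info_placement, xen_vcpu and xen_vcpu_info follow the pvops naming, but the function itself is an illustration, not the actual tree:

static DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
static DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
static int have_vcpu_info_placement = 1;

/* Illustrative per-CPU setup: try the alternate placement, but keep
 * running on the legacy shared_info slot if the hypervisor refuses. */
static void setup_vcpu_info(int cpu)
{
	struct vcpu_register_vcpu_info info;
	struct vcpu_info *vcpup;

	/* Always have a working default: the slot in shared_info. */
	per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];

	if (!have_vcpu_info_placement)
		return;

	vcpup = &per_cpu(xen_vcpu_info, cpu);
	info.mfn = virt_to_mfn(vcpup);
	info.offset = offset_in_page(vcpup);

	if (HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info)) {
		/* e.g. the hypervisor ran out of global map space on
		 * 32-bit: stay on the shared_info slot, don't crash. */
		have_vcpu_info_placement = 0;
	} else
		per_cpu(xen_vcpu, cpu) = vcpup;
}

That keeps registration failure non-fatal for the guest, though, as noted, it does nothing about the underlying heap/global-map exhaustion in the hypervisor.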

   J



 

