
Re: [Xen-devel] [PATCH v3 COLOPre 08/26] tools/libxc: support to resume uncooperative HVM guests



On Thu, 2015-06-25 at 14:25 +0800, Yang Hongyang wrote:
> From: Wen Congyang <wency@xxxxxxxxxxxxxx>
> 
> 1. suspend
> a. PVHVM and PV: we use the same way to suspend the guest(send the suspend

space between "guest" and the open parenthesis please.

>    request to the guest). If the guest doesn't support evtchn, the xenstore
>    variant will be used, suspending the guest via the XenBus control node.
> b. pure HVM: we call xc_domain_shutdown(..., SHUTDOWN_suspend) to suspend
>    the guest
> 
> 2. Resume:
> a. fast path
>    In this case, we don't change the guest's state.
>    PV: modify the return code to 1, and then call the domctl:
>        XEN_DOMCTL_resumedomain
>    PVHVM: same as PV
>    HVM: do nothing in modify_returncode, and then call the domctl:
>         XEN_DOMCTL_resumedomain
> b. slow
>    In this case, we have changed the guest's state.

"have" or "will"? AIUI the latter would be more accurate.

>    PV: update start info, and reset all secondary CPU states. Then call the
>    domctl: XEN_DOMCTL_resumedomain
>    PVHVM and HVM cannot be resumed.

I'm confused -- isn't the purpose of this patch to make PVHVM support
resume?

> For PVHVM, in my test, only call the domctl: XEN_DOMCTL_resumedomain
> can work. I am not sure if we should update start info and reset all
> secondary CPU states.
> 
> For pure HVM guest, in my test, only call the domctl:
> XEN_DOMCTL_resumedomain can work.
> 
> So we can call libxl__domain_resume(..., 1) if we don't change the guest
> state, otherwise call libxl__domain_resume(..., 0).

Hrm, so it sounds here like the correctness of this new functionality
requires that the caller has not messed with the domain's state? What
sort of changes to the guest state are we talking about here?

Isn't that a new requirement for this call? If so, then it should be
documented somewhere, specifically what sorts of changes are and are not
allowed and which types of guests are affected.
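
For reference, my understanding is that the top-level entry point in
tools/libxc/xc_resume.c already exposes exactly that fast/slow choice,
roughly as follows (paraphrased from my reading of the tree, not
verbatim):

    /* fast != 0: the caller has not touched guest state, take the
     * cooperative path; fast == 0: take the slow path. */
    int xc_domain_resume(xc_interface *xch, uint32_t domid, int fast)
    {
        return (fast
                ? xc_domain_resume_cooperative(xch, domid)
                : xc_domain_resume_any(xch, domid));
    }

so the comment/documentation on xc_domain_resume() (and/or the libxl
wrapper) seems like the natural place to spell out the new constraints.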

> 
> Signed-off-by: Wen Congyang <wency@xxxxxxxxxxxxxx>
> Signed-off-by: Yang Hongyang <yanghy@xxxxxxxxxxxxxx>
> ---
>  tools/libxc/xc_resume.c | 22 ++++++++++++++++++----
>  1 file changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libxc/xc_resume.c b/tools/libxc/xc_resume.c
> index e67bebd..bd82334 100644
> --- a/tools/libxc/xc_resume.c
> +++ b/tools/libxc/xc_resume.c
> @@ -109,6 +109,23 @@ static int xc_domain_resume_cooperative(xc_interface *xch, uint32_t domid)
>      return do_domctl(xch, &domctl);
>  }
>  
> +static int xc_domain_resume_hvm(xc_interface *xch, uint32_t domid)
> +{
> +    DECLARE_DOMCTL;
> +
> +    /*
> +     * If it is PVHVM, the hypercall return code is 0, because this
> +     * is not a fast path resume, we do not modify_returncode as in
> +     * xc_domain_resume_cooperative.
> +     * (resuming it in a new domain context)
> +     *
> +     * If it is a HVM, the hypercall is a NOP.
> +     */
> +    domctl.cmd = XEN_DOMCTL_resumedomain;
> +    domctl.domain = domid;
> +    return do_domctl(xch, &domctl);

There are already several open-coded instances of this
XEN_DOMCTL_resumedomain domctl, and I think putting this particular one
into a helper is actually more confusing than just inlining it at the
caller.

In particular, when reading this function my first question was "how do
we know this is not a fast path resume?". The answer is that the only
caller is the slow path resume case, but that's not evident from the
context (what if someone adds a second call?).

So I think at least the comment ought to go at the callsite, at which
point this function doesn't add much.

(Ideally all the open-coded do_domctl calls would go into a single
do_domainresume helper or something, but I don't think you need to do
that unless you really want to.)
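
I.e. something like this untested sketch (do_domainresume is just the
strawman name from above):

    /* Untested sketch: one place for the repeated XEN_DOMCTL_resumedomain
     * invocation; each caller keeps its own comment explaining why the
     * bare domctl is sufficient in that context. */
    static int do_domainresume(xc_interface *xch, uint32_t domid)
    {
        DECLARE_DOMCTL;

        domctl.cmd = XEN_DOMCTL_resumedomain;
        domctl.domain = domid;
        return do_domctl(xch, &domctl);
    }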



