
Re: [Xen-devel] [PATCH RFC v2] x86/domctl: Don't pause the whole domain if only getting vcpu state



On Mi, 2018-02-21 at 18:24 +0000, George Dunlap wrote:
> On Fri, Oct 6, 2017 at 11:02 AM, Alexandru Isaila
> <aisaila@xxxxxxxxxxxxxxx> wrote:
> >
> > This patch adds the hvm_save_one_cpu_ctxt() function.
> > It optimizes by pausing only the vcpu in question on all
> > HVMSR_PER_VCPU save callbacks, where data for only one vcpu is
> > required.
> Sorry it's taken so long to get back to you on this one.
>
> So first of all, a big issue with this patch in general is that
> there's way too much code duplication.  Duplicating the code in every
> case will not only increase HV code size and decrease cache
> effectiveness, it will also increase the likelihood of subtle bugs
> creeping in when the two copies don't match up.  If you were going to
> do it this way I think you'd basically have to make a *_save_*_one()
> function for each callback.

That sounds good.
>
> The only other option would be to pass a cpumask instead, and set
> either a single bit or all the bits; but that seems a bit wasteful.
> (Jan, Andy -- if you don't like this solution, now is the time to say
> so.)
>
> However -- while pausing only one vcpu for register state should be
> fine, are you sure that it's fine for all the other callbacks?  I.e.,
> are you sure that there are no instances where vcpu N will directly
> modify vcpu M's entry in a way that will give you an inconsistent
> save
> record?

I am not sure; I've only tested hvm_save_cpu_ctxt. The rest of the
save functions were changed so that the patch would be general.
>
> If so, you should mention it in the commit message; otherwise, it
> might be better to make this only for the cpu context (since that
> seems to be what you're mostly interested in).

Yes, I am mainly interested in the cpu context, but Jan suggested
making this change for all save functions.
>
> Further comments below...
>
> >
> > -static int hvm_save_cpu_ctxt(struct domain *d,
> > hvm_domain_context_t *h)
> > +void hvm_save_one_cpu_ctxt(struct vcpu *v, struct hvm_hw_cpu
> > *ctxt)
> I'd call this hvm_save_cpu_ctxt_one()  (with the _one at the end);
> same with all the other singleton callbacks.
>
> >
> > diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> > index 8984a23..97b56f7 100644
> > --- a/xen/arch/x86/hvm/save.c
> > +++ b/xen/arch/x86/hvm/save.c
> > @@ -138,6 +138,7 @@ int hvm_save_one(struct domain *d, unsigned int
> > typecode, unsigned int instance,
> >      int rv;
> >      hvm_domain_context_t ctxt = { };
> >      const struct hvm_save_descriptor *desc;
> > +    bool is_single_instance = false;
> As far as I can tell is mostly unnecessary -- if you look at the
> original code, the rather wonky algorithm is:
>
> * Pause all vcpus
> * Collect data for all vcpus
> * Loop over this data until we find one whose instance id matches the
> one we're looking for, and copy it out.
> * Unpause all vcpus
>
> In other words, hvm_save_one(), as the name suggests, only copies a
> single instance anyway.  If anyone ever passed d->max_vcpus, then
> nothing would be copied (since none of the individual records would
> ever match that "instance").
>
> You will need to pause the whole domain for callbacks not of type
> HVMSR_PER_VCPU; but in any case you'll only ever have to copy a
> single
> record.  So you can just re-write this function without the loop, now
> that we have a way to ask the individual callbacks for only a single
> instance.
>
> One other note about the patch as it is: In the error path you forget
> to un-pause.
Thanks, will do.

~Alex
>
>  -George
>

