
Re: [Xen-devel] [PATCH 13/25] argo: implement the register op



On Thu, Dec 20, 2018 at 11:28 PM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>
> >>> On 21.12.18 at 02:25, <christopher.w.clark@xxxxxxxxx> wrote:
> > On Thu, Dec 20, 2018 at 12:29 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >>
> >> >>> On 20.12.18 at 06:29, <christopher.w.clark@xxxxxxxxx> wrote:
> >> > On Wed, Dec 12, 2018 at 1:48 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >> >>
> >> >> > +static int
> >> >> > +argo_find_ring_mfns(struct domain *d, struct argo_ring_info *ring_info,
> >> >> > +                    uint32_t npage, XEN_GUEST_HANDLE_PARAM(argo_pfn_t) pfn_hnd,
> >> >> > +                    uint32_t len)
> >> >> > +{
> >> >> > +    int i;
> >> >> > +    int ret = 0;
> >> >> > +
> >> >> > +    if ( (npage << PAGE_SHIFT) < len )
> >> >> > +        return -EINVAL;
> >> >> > +
> >> >> > +    if ( ring_info->mfns )
> >> >> > +    {
> >> >> > +        /*
> >> >> > +         * Ring already existed. Check if it's the same ring,
> >> >> > +         * i.e. same number of pages and all translated gpfns still
> >> >> > +         * translating to the same mfns
> >> >> > +         */
> >> >>
> >> >> This comment makes me wonder whether the translations are
> >> >> permitted to change at other times. If so I'm not sure what
> >> >> value verification here has. If not, this probably would want to
> >> >> be debugging-only code.
> >> >
> >> > My understanding is that the gfn->mfn translation is not necessarily stable
> >> > across entry and exit from host power state S4, suspend to disk.
>
> Now I'm afraid there's some confusion here: Originally you've
> said "host".
>
> >> How would that be? It's not stable across guest migration (or
> >> its non-live save/restore equivalent),
> >
> > Right, that's clear.
> >
> >> but how would things change across S3?
> >
> > I don't think that they do change in that case.
> >
> > From studying the code involved above, a related item: the guest runs the same
> > suspend and resume kernel code before entering into/exiting from either guest
> > S3 or S4, so the guest kernel resume code needs to re-register the rings, to
> > cover the case where it is coming up in an environment where they were dropped
> > - so that's what it does.
> >
> > This relates to the code section above: if guest entry to S3 is aborted at the
> > final step (e.g. an error, or the platform refuses, perhaps due to a physical
> > device interaction with passthrough) then the hypervisor has not torn down the
> > rings, the guest remains running within the same domain, and the guest resume
> > logic runs, which runs through re-registration for all its rings. The check in
> > the logic above allows the existing ring mappings within the hypervisor to be
> > preserved.
>
> Yet now you suddenly talk about guest S3.

Well, the context is that you had just asked about S3, without
specifying host or guest. Host S3 doesn't involve much at all, so I
went and studied the code in both the Linux driver and the hypervisor
to determine what happens in the case of guest S3, and then replied
with the above since it is relevant to the code in question. I hope it
was clear that I was referring to guest S3 in my last reply.

That logic aims to make ring registration idempotent: it avoids
tearing down established mappings of the ring pages in the case where
doing so isn't needed.

> >> And there's no support for S4 (and I can't see it appearing any time soon).
> >
> > OK. oh well.
>
> Considering the original "host" context, my response here was
> relating to host S4. Guest S4 ought to be functional (as being
> mostly a guest kernel function anyway).

ack.

Christopher
