
Re: [Xen-devel] [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers



On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> On 03.09.2019 18:14, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -1493,9 +1493,18 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
> >  
> >      ASSERT(s);
> >  
> > +    if ( hvm_ioreq_is_internal(id) && buffered )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return X86EMUL_UNHANDLEABLE;
> > +    }
> > +
> >      if ( buffered )
> >          return hvm_send_buffered_ioreq(s, proto_p);
> 
> Perhaps better (to avoid yet another conditional on the non-
> buffered path)
> 
>     if ( buffered )
>     {
>         if ( likely(!hvm_ioreq_is_internal(id)) )
>             return hvm_send_buffered_ioreq(s, proto_p);
> 
>         ASSERT_UNREACHABLE();
>         return X86EMUL_UNHANDLEABLE;
>     }
> 
> ?

Sure.

> > +    if ( hvm_ioreq_is_internal(id) )
> > +        return s->handler(curr, proto_p, s->data);
> 
> At this point I'm becoming curious what the significance of
> ioreq_t's state field is for internal servers, as nothing was
> said so far in this regard: Is it entirely unused? Is every
> handler supposed to drive it? If so, what about return value
> here and proto_p->state not really matching up? And if not,
> shouldn't you update the field here, at the very least to
> avoid any chance of confusing callers?

The ioreq state field, when used by internal servers, is modified
here so that it can act as an indication of long-running operations,
but that's introduced in patch 11/11 ('ioreq: provide support for
long-running operations...').
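
To illustrate, the dispatch point could end up looking along these
lines (only a sketch, not the actual patch 11/11; the mapping of
return values to states is an assumption here):

    if ( hvm_ioreq_is_internal(id) )
    {
        int rc = s->handler(curr, proto_p, s->data);

        /*
         * Assumed convention: X86EMUL_RETRY flags a long-running
         * operation still in flight; anything else has completed.
         */
        proto_p->state = (rc == X86EMUL_RETRY) ? STATE_IOREQ_INPROCESS
                                               : STATE_IORESP_READY;
        return rc;
    }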

> 
> A possible consequence of the answers to this might be for
> the hook's middle parameter to be constified (in patch 4).

Yes, I think it can be constified.
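
For reference, the hook would then be along these lines (the typedef
name is only illustrative, not a quote from patch 4):

    typedef int (*hvm_ioreq_handler_t)(struct vcpu *v,
                                       const ioreq_t *ioreq,
                                       void *data);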

> Having said this, as a result of having looked at some of the
> involved code, and with the cover letter not clarifying this,
> what's the reason for going this seemingly more complicated
> route, rather than putting vPCI behind the hvm_io_intercept()
> machinery, just like is the case for other internal handling?

If vPCI is handled at the hvm_io_intercept level (as it's done ATM)
then it's not possible to have both (external) ioreq servers and vPCI
handling accesses to different devices in the PCI config space: vPCI
would trap all accesses to the PCI IO ports and the MCFG regions, so
those would never reach the ioreq processing.
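
To illustrate the difference, going through the ioreq machinery means
a server is selected per access, so something like the following can
route each device to either vPCI or an external emulator (sketch
only; using XEN_INVALID_IOSERVID as the no-match sentinel is an
assumption):

    /* Turn the config-space access into an ioreq and pick a server
     * based on the SBDF it targets. */
    ioservid_t id = hvm_select_ioreq_server(currd, &p);

    if ( id != XEN_INVALID_IOSERVID )
        /* Same path whether the selected server is internal (vPCI)
         * or an external emulator. */
        rc = hvm_send_ioreq(id, &p, false);

With hvm_io_intercept there's no such per-device selection: the first
registered handler that claims the port or MMIO range takes every
access to it.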

Thanks, Roger.
