
Re: [Xen-devel] [PATCH v2 09/11] vpci: register as an internal ioreq server



On Fri, Sep 27, 2019 at 10:29:21AM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> > Sent: 26 September 2019 16:07
> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> > Wei Liu <wl@xxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> > George Dunlap <George.Dunlap@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>;
> > Julien Grall <julien.grall@xxxxxxx>; Konrad Rzeszutek Wilk
> > <konrad.wilk@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>;
> > Tim (Xen.org) <tim@xxxxxxx>
> > Subject: Re: [PATCH v2 09/11] vpci: register as an internal ioreq server
> > 
> > On Tue, Sep 10, 2019 at 03:49:41PM +0200, Paul Durrant wrote:
> > > > -----Original Message-----
> > > > From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> > > > Sent: 03 September 2019 17:14
> > > > To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> > > > Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> > > > Wei Liu <wl@xxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> > > > George Dunlap <George.Dunlap@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>;
> > > > Julien Grall <julien.grall@xxxxxxx>; Konrad Rzeszutek Wilk
> > > > <konrad.wilk@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>;
> > > > Tim (Xen.org) <tim@xxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > > > Subject: [PATCH v2 09/11] vpci: register as an internal ioreq server
> > > > @@ -478,6 +480,67 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
> > > >      spin_unlock(&pdev->vpci->lock);
> > > >  }
> > > >
> > > > +#ifdef __XEN__
> > > > +static int ioreq_handler(struct vcpu *v, ioreq_t *req, void *data)
> > > > +{
> > > > +    pci_sbdf_t sbdf;
> > > > +
> > > > +    if ( req->type == IOREQ_TYPE_INVALIDATE )
> > > > +        /*
> > > > +         * Ignore invalidate requests, those can be received even without
> > > > +         * having any memory ranges registered, see send_invalidate_req.
> > > > +         */
> > > > +        return X86EMUL_OKAY;
> > >
> > > In general, I wonder whether internal servers will ever need to deal
> > > with invalidate? The code only exists to get QEMU to drop its map cache
> > > after a decrease_reservation so that the page refs get dropped.
> > 
> > I think the best solution here is to rename hvm_broadcast_ioreq to
> > hvm_broadcast_ioreq_external and switch its callers. Both
> > send_timeoffset_req and send_invalidate_req seem only relevant to
> > external ioreq servers.
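> > 
> > Something along these lines (just a sketch, untested, assuming the
> > hvm_ioreq_is_internal() helper introduced earlier in this series and
> > keeping hvm_send_ioreq()'s current staging signature):
> > 
> >     static unsigned int hvm_broadcast_ioreq_external(ioreq_t *p,
> >                                                      bool buffered)
> >     {
> >         struct domain *d = current->domain;
> >         struct hvm_ioreq_server *s;
> >         unsigned int id, failed = 0;
> > 
> >         /* Broadcasts only go to external (device model) servers. */
> >         FOR_EACH_IOREQ_SERVER(d, id, s)
> >             if ( !hvm_ioreq_is_internal(id) &&
> >                  hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
> >                 failed++;
> > 
> >         return failed;
> >     }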
> 
> send_timeoffset_req() is a relic which ought to be replaced with another
> mechanism IMO...
> 
> When an HVM guest writes its RTC, a new 'timeoffset' value (offset of RTC
> from host time) is calculated (also applied to the PV wallclock) and
> advertised via this ioreq. In XenServer, this is picked up by QEMU, forwarded
> via QMP to XAPI and then written into the VM meta-data (which then causes it
> to be written into xenstore too). All this is so that the guest's RTC can be
> set correctly when it is rebooted... There has to be a better way (e.g.
> extracting the RTC via HVM context and saving it before cleaning up the
> domain).
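> 
> For reference, the hypervisor side of this is little more than a buffered
> broadcast of a TIMEOFFSET ioreq (a sketch from memory; details may differ
> from what is in the tree):
> 
>     /* Advertise a new RTC offset (seconds from host time) to emulators. */
>     void send_timeoffset_req(unsigned long timeoff)
>     {
>         ioreq_t p = {
>             .type = IOREQ_TYPE_TIMEOFFSET,
>             .size = 8,
>             .count = 1,
>             .dir = IOREQ_WRITE,
>             .data = timeoff,
>             .state = STATE_IOREQ_READY,
>         };
> 
>         if ( timeoff == 0 )
>             return;
> 
>         /* Buffered: no emulator has claimed this range, so broadcast. */
>         if ( hvm_broadcast_ioreq(&p, true) != 0 )
>             gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
>     }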
> 
> send_invalidate_req() is relevant for any emulator maintaining a cache of 
> guest->host memory mappings which, I guess, could include internal emulators 
> even if this is not the case at the moment.

Maybe, but I would expect an internal emulator to get a reference on
the gfn if it does need to keep it in some kind of cache; otherwise I
don't think code in the hypervisor should be keeping such references.
IMO I would start by not forwarding invalidate requests to internal
emulators. We can always change this in the future if we come up with
a use case that needs it.
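
To illustrate, an internal emulator that wanted to cache a guest frame
could take (and later drop) a page reference itself rather than rely on
invalidate broadcasts. A sketch only, not part of this series; the
cache_map_gfn/cache_unmap_gfn names are made up for illustration:

    /* Map a guest frame for caching, holding a page reference. */
    static void *cache_map_gfn(struct domain *d, gfn_t gfn,
                               struct page_info **page)
    {
        p2m_type_t t;

        *page = get_page_from_gfn(d, gfn_x(gfn), &t, P2M_ALLOC);
        if ( !*page )
            return NULL;

        if ( !p2m_is_ram(t) )
        {
            put_page(*page);
            return NULL;
        }

        return __map_domain_page_global(*page);
    }

    static void cache_unmap_gfn(void *va, struct page_info *page)
    {
        unmap_domain_page_global(va);
        /* Drop the reference taken by get_page_from_gfn() above. */
        put_page(page);
    }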

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

