
Re: [Xen-devel] [PATCH v4 5/8] ioreq-server: add support for multiple servers



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 09 April 2014 14:47
> To: Paul Durrant
> Cc: Ian Campbell; Ian Jackson; Stefano Stabellini; xen-devel@xxxxxxxxxxxxx
> Subject: RE: [PATCH v4 5/8] ioreq-server: add support for multiple servers
> 
> >>> On 09.04.14 at 15:32, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >>> On 02.04.14 at 17:11, <paul.durrant@xxxxxxxxxx> wrote:
> >> Also, I didn't see a limit being enforced on the number of elements
> >> that can be added to these lists, yet allowing this to be unlimited is
> >> a latent security issue.
> >>
> >
> > Guest domains cannot add to the lists, only the emulating domain, but if
> > that is unprivileged then, yes, that is a security issue.
> 
> And hence it needs to be fixed, or the operation added to the list of
> disaggregation-unsafe ones (which XSA-77 added). I'd clearly favor
> the former...
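
Something like the following would be enough to bound the growth (a minimal
sketch only; the structure, field and limit names are illustrative and not
taken from the series):

#define MAX_NR_IO_RANGES 256

static int hvm_ioreq_server_add_range(struct hvm_ioreq_server *s,
                                      struct hvm_io_range *r)
{
    /* Reject additions beyond a fixed per-server limit. */
    if ( s->nr_ranges >= MAX_NR_IO_RANGES )
        return -ENOSPC;

    list_add(&r->list_entry, &s->range_list);
    s->nr_ranges++;

    return 0;
}
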
> 
> >> >  struct hvm_domain {
> >> > +    /* Guest page range used for non-default ioreq servers */
> >> > +    unsigned long           ioreq_gmfn_base;
> >> > +    unsigned int            ioreq_gmfn_count;
> >> > +    unsigned long           ioreq_gmfn_mask;
> >> > +
> >> > +    /* Lock protects all other values in the following block */
> >> >      spinlock_t              ioreq_server_lock;
> >> > -    struct hvm_ioreq_server *ioreq_server;
> >> > +    ioservid_t              ioreq_server_id;
> >> > +    struct list_head        ioreq_server_list;
> >> > +    unsigned int            ioreq_server_count;
> >> > +    struct hvm_ioreq_server *default_ioreq_server;
> >> > +
> >> > +    /* Cached CF8 for guest PCI config cycles */
> >> > +    uint32_t                pci_cf8;
> >> > +    spinlock_t              pci_lock;
> >>
> >> Please consider padding when adding new fields here - try grouping
> >> 64-bit quantities together rather than alternating between 32- and
> >> 64-bit ones.
> >
> > Why do we need to care about padding? Re-ordering for efficiency of
> > space is reasonable.
> 
> That's what I meant - try to avoid unnecessary padding.
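
FWIW, one possible grouping (ordering only, same fields as in the patch;
the exact padding depends on the sizes of spinlock_t and ioservid_t on a
64-bit build) would be:

    /* Guest page range used for non-default ioreq servers */
    unsigned long           ioreq_gmfn_base;
    unsigned long           ioreq_gmfn_mask;
    unsigned int            ioreq_gmfn_count;

    /* Lock protects all other values in the following block */
    spinlock_t              ioreq_server_lock;
    ioservid_t              ioreq_server_id;
    unsigned int            ioreq_server_count;
    struct list_head        ioreq_server_list;
    struct hvm_ioreq_server *default_ioreq_server;

    /* Cached CF8 for guest PCI config cycles */
    uint32_t                pci_cf8;
    spinlock_t              pci_lock;
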
> 
> >> > --- a/xen/include/asm-x86/hvm/hvm.h
> >> > +++ b/xen/include/asm-x86/hvm/hvm.h
> >> > @@ -228,7 +228,8 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
> >> >                              struct page_info **_page, void **_va);
> >> >  void destroy_ring_for_helper(void **_va, struct page_info *page);
> >> >
> >> > -bool_t hvm_send_assist_req(struct vcpu *v, const ioreq_t *p);
> >> > +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
> >>
> >> Any reason you couldn't avoid adding the const in one of the earlier
> >> patches?
> >>
> >
> > You asked for it in a previous review. I'm happy to lose the const again.
> 
> If it doesn't persist through the series, there's little point in adding it.
> 
> >> And for this to be usable with other architectures that may have
> >> address spaces other than memory and I/O ports it would seem
> >> desirable to not consider this a boolean, but an enumerator.
> >
> > Maybe it would be better, then, to consolidate io ranges and pci devs
> > with the existing ioreq type values in the interface. I.e.:
> >
> > #define IOREQ_TYPE_PIO          0 /* pio */
> > #define IOREQ_TYPE_COPY         1 /* mmio ops */
> 
> Right, except that "COPY" is sort of odd here - why not "MMIO"?
> 

Good question. Historical, I imagine. I prefer the term MMIO, but changing
the ioreq header would probably break many things, so I'd aim to #include it
and then do something like

#define RANGE_TYPE_PORTIO       (IOREQ_TYPE_PIO)
#define RANGE_TYPE_MMIO         (IOREQ_TYPE_COPY)

Etc.
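
And if PCI config cycles end up with their own ioreq type as part of that
consolidation, then presumably also something like (the type name below is
illustrative, not something in the current ioreq header):

#define RANGE_TYPE_PCI          (IOREQ_TYPE_PCI_CONFIG)
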

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

