
Re: [Xen-devel] [PATCH v2 2/2] ioreq-server: Support scatter page forwarding



>>> On 22.08.14 at 21:18, <wei.ye@xxxxxxxxx> wrote:

Independent of my earlier question and Paul's recent one regarding
the need for all this code, a couple of mechanical comments:

> @@ -744,6 +750,98 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
>      spin_unlock(&s->lock);
>  }
>  
> +static struct pglist_hash_table *hvm_ioreq_server_lookup_pglist_hash_table(
> +                          struct pglist_hash_table *pglist_ht, uint64_t gpfn)
> +{
> +    unsigned int index = pglist_hash(gpfn);
> +    struct pglist_hash_table *entry;
> +
> +    for ( entry = &pglist_ht[index]; entry != NULL; entry = entry->next )
> +        if ( entry->gpfn != PGLIST_INVALID_GPFN && entry->gpfn == gpfn )

Please write efficient code: by making sure up front that gpfn isn't
PGLIST_INVALID_GPFN, the left side of the && (which otherwise gets
evaluated on every loop iteration) can be dropped.
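
I.e. something along these lines (just a sketch - whether an ASSERT()
or an explicit early exit is more appropriate depends on what the
callers can legitimately pass in):

    /* gpfn validated once up front, so no per-iteration check needed. */
    ASSERT(gpfn != PGLIST_INVALID_GPFN);

    for ( entry = &pglist_ht[index]; entry != NULL; entry = entry->next )
        if ( entry->gpfn == gpfn )
            return entry;

    return NULL;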

> +static int hvm_map_pages_to_ioreq_server(struct domain *d, ioservid_t id,
> +                                         uint16_t set, uint16_t nr_pages,

Pointless uses of uint16_t. The first one should be bool_t, the
second unsigned int.

> +                                         uint64_t *gpfn)

And this one would conventionally be unsigned long * or xen_pfn_t *.
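
Taken together, the prototype would then look something like

static int hvm_map_pages_to_ioreq_server(struct domain *d, ioservid_t id,
                                         bool_t set, unsigned int nr_pages,
                                         xen_pfn_t *gpfn)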

> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +    unsigned int i;
> +    p2m_type_t ot, nt;
> +
> +    if ( set )
> +    {
> +        ot = p2m_ram_rw;
> +        nt = p2m_mmio_write_dm;
> +    }
> +    else
> +    {
> +        ot = p2m_mmio_write_dm;
> +        nt = p2m_ram_rw;
> +    }
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
> +
> +    rc = -ENOENT;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server.list,
> +                          list_entry )
> +    {
> +        if ( s != d->arch.hvm_domain.default_ioreq_server &&
> +             s->id == id )
> +        {
> +            spin_lock(&s->pglist_hash_lock);
> +
> +            for ( i = 0; i < nr_pages; i++ )

Up to 64k iterations without preemption is already quite a lot. Did
you measure how much worst case input would take to process?
(Note that I raised this question before without you addressing it.)
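
If it can take too long, the loop needs a preemption point, along the
lines of the sketch below (the error code and check interval are
illustrative only, and both locks would of course need dropping before
a continuation could actually be created):

    for ( i = 0; i < nr_pages; i++ )
    {
        /* Allow preemption every so many iterations. */
        if ( i && !(i & 0xff) && hypercall_preempt_check() )
        {
            rc = -ERESTART; /* caller arranges the continuation */
            break;
        }
        /* ... per-page processing ... */
    }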

> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -367,6 +367,18 @@ struct xen_hvm_set_ioreq_server_state {
>  typedef struct xen_hvm_set_ioreq_server_state xen_hvm_set_ioreq_server_state_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
>  
> +#define HVMOP_map_pages_to_ioreq_server 23
> +#define XEN_IOREQSVR_MAX_MAP_BATCH      128
> +struct xen_hvm_map_pages_to_ioreq_server {
> +    domid_t domid;   /* IN - domain to be serviced */
> +    ioservid_t id;   /* IN - server id */
> +    uint16_t set;    /* IN - 1: map pages, 0: unmap pages */

The comment doesn't match the implementation.

> +    uint16_t nr_pages;  /* IN - page count */
> +    uint64_t page_list[XEN_IOREQSVR_MAX_MAP_BATCH]; /* IN - gpfn list */

Ah, so the iteration limit above really is XEN_IOREQSVR_MAX_MAP_BATCH.
That's fine then, except that you need to bounds-check the input
value before iterating.
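
I.e. something like (assuming the copied-in argument structure is
named op):

    if ( op.nr_pages > XEN_IOREQSVR_MAX_MAP_BATCH )
        return -EINVAL;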

However I wonder whether you really want to bake such a fixed
limit into the hypercall. Using a handle here and limiting the count
to a (currently) sane maximum in the implementation would then
allow for bumping the count in the future.
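
E.g. (shape only, following what other sub-ops in this header do with
XEN_GUEST_HANDLE_64(); field layout would need checking for padding
and alignment):

struct xen_hvm_map_pages_to_ioreq_server {
    domid_t domid;      /* IN - domain to be serviced */
    ioservid_t id;      /* IN - server id */
    uint16_t set;       /* IN - map vs unmap */
    uint16_t nr_pages;  /* IN - page count */
    XEN_GUEST_HANDLE_64(uint64) page_list; /* IN - gpfn list */
};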

Jan

