
Re: [Xen-devel] [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 25 November 2016 13:51
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx; Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 6/8] dm_op: convert
> HVMOP_set_mem_type
> 
> >>> On 18.11.16 at 18:14, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/tools/libxc/xc_misc.c
> > +++ b/tools/libxc/xc_misc.c
> > @@ -584,28 +584,18 @@ int xc_hvm_modified_memory( int
> xc_hvm_set_mem_type(
> >      xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t
> first_pfn, uint64_t nr)
> >  {
> > -    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg);
> > -    int rc;
> > -
> > -    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > -    if ( arg == NULL )
> > -    {
> > -        PERROR("Could not allocate memory for xc_hvm_set_mem_type
> hypercall");
> > -        return -1;
> > -    }
> > +    struct xen_dm_op op;
> > +    struct xen_dm_op_set_mem_type *data;
> >
> > -    arg->domid        = dom;
> > -    arg->hvmmem_type  = mem_type;
> > -    arg->first_pfn    = first_pfn;
> > -    arg->nr           = nr;
> > +    op.op = DMOP_set_mem_type;
> > +    data = &op.u.set_mem_type;
> >
> > -    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
> > -                  HVMOP_set_mem_type,
> > -                  HYPERCALL_BUFFER_AS_ARG(arg));
> > -
> > -    xc_hypercall_buffer_free(xch, arg);
> > +    data->mem_type = mem_type;
> > +    data->first_pfn = first_pfn;
> > +    /* NOTE: The following assignment truncates nr to 32-bits */
> > +    data->nr = nr;
> 
> What strange a comment. Why don't you - again as done in the
> hvmctl series - simply correct the function's parameter type?
> (Same for xc_hvm_track_dirty_vram() and
> xc_hvm_modified_memory() then.)

Because that may cause compiler warnings in clients when they pick up the new 
version of the header. I didn't want to have any adverse effect, so simply 
adding a comment noting that the value is truncated (as it always has been) 
seemed like the best thing to do.
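
For illustration only, if we did want to guard against the silent truncation 
without changing the prototype, a bounds check along these lines would do 
(just a sketch, not part of this patch; it assumes reporting the error via 
errno/EINVAL as libxc usually does):

    if ( nr > UINT32_MAX )
    {
        errno = EINVAL;
        return -1;
    }

    /* nr now provably fits the 32-bit field. */
    data->nr = nr;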

> 
> > --- a/xen/arch/x86/hvm/dm.c
> > +++ b/xen/arch/x86/hvm/dm.c
> > @@ -160,6 +160,16 @@ static int dm_op_set_pci_link_route(struct domain
> *d, uint8_t link,
> >      return 0;
> >  }
> >
> > +static bool_t dm_op_allow_p2m_type_change(p2m_type_t old,
> p2m_type_t new)
> 
> bool
> 

Ok.

> > +{
> > +    if ( p2m_is_ram(old) ||
> > +         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
> > +         (old == p2m_ioreq_server && new == p2m_ram_rw) )
> > +        return 1;
> 
> true
> 
> > +
> > +    return 0;
> 
> false (or perhaps even better a simple return statement, and perhaps
> you can by now guess where I could refer you)
> 
> > +}
> > +
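
Presumably something like this then (sketch only, same predicate expressed 
as a single return):

    static bool dm_op_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
    {
        return p2m_is_ram(old) ||
               (p2m_is_hole(old) && new == p2m_mmio_dm) ||
               (old == p2m_ioreq_server && new == p2m_ram_rw);
    }
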
> >  static int dm_op_modified_memory(struct domain *d, xen_pfn_t
> *first_pfn,
> 
> Any reason of putting it here rather than ...
> 
> > @@ -205,6 +215,79 @@ static int dm_op_modified_memory(struct domain
> *d, xen_pfn_t *first_pfn,
> >      return rc;
> >  }
> >
> > +
> > +static int dm_op_set_mem_type(struct domain *d, hvmmem_type_t
> mem_type,
> > +                              xen_pfn_t *first_pfn, unsigned int *nr)
> 
> ... right before this one (the helper of which it is)?
> 

No particular reason; I can move it.

> Also please don't add a second blank line above the function.
> 

Yes, that's a mistake.

> > +{
> > +    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
> > +    unsigned int iter;
> > +    int rc;
> > +
> > +    /* Interface types to internal p2m types */
> > +    static const p2m_type_t memtype[] = {
> > +        [HVMMEM_ram_rw]  = p2m_ram_rw,
> > +        [HVMMEM_ram_ro]  = p2m_ram_ro,
> > +        [HVMMEM_mmio_dm] = p2m_mmio_dm,
> > +        [HVMMEM_unused] = p2m_invalid,
> > +        [HVMMEM_ioreq_server] = p2m_ioreq_server
> > +    };
> > +
> > +    if ( (*first_pfn > last_pfn) ||
> > +         (last_pfn > domain_get_maximum_gpfn(d)) )
> > +        return -EINVAL;
> > +
> > +    if ( mem_type >= ARRAY_SIZE(memtype) ||
> > +         unlikely(mem_type == HVMMEM_unused) )
> > +        return -EINVAL;
> > +
> > +    iter = 0;
> > +    rc = 0;
> > +    while ( iter < *nr )
> > +    {
> > +        unsigned long pfn = *first_pfn + iter;
> > +        p2m_type_t t;
> > +
> > +        get_gfn_unshare(d, pfn, &t);
> 
> Note the disagreement between function and parameter names.
> 

It was inherited but I'll correct it.
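
Assuming it's the pfn vs. gfn naming you mean, the local would just become 
(sketch):

    unsigned long gfn = *first_pfn + iter;
    p2m_type_t t;

    get_gfn_unshare(d, gfn, &t);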

> > +        if ( p2m_is_paging(t) )
> > +        {
> > +            put_gfn(d, pfn);
> > +            p2m_mem_paging_populate(d, pfn);
> > +            rc = -EAGAIN;
> > +            break;
> > +        }
> > +        if ( p2m_is_shared(t) )
> > +        {
> > +            put_gfn(d, pfn);
> > +            rc = -EAGAIN;
> > +            break;
> > +        }
> > +        if ( !dm_op_allow_p2m_type_change(t, memtype[mem_type]) )
> > +        {
> > +            put_gfn(d, pfn);
> > +            rc = -EINVAL;
> > +            break;
> > +        }
> 
> Why can't all of these simply be return statements?
> 
> > +        rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);
> > +        put_gfn(d, pfn);
> > +
> > +        if ( rc )
> > +            break;
> 
> Or, again as done you know where, fold some of those redundant
> put_gfn()s as well, by using if/else-if.

That would be neater.
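
Something along these lines, presumably (sketch only, intended to keep the 
behaviour unchanged):

        get_gfn_unshare(d, pfn, &t);

        if ( p2m_is_paging(t) )
        {
            put_gfn(d, pfn);
            p2m_mem_paging_populate(d, pfn);
            rc = -EAGAIN;
            break;
        }

        if ( p2m_is_shared(t) )
            rc = -EAGAIN;
        else if ( !dm_op_allow_p2m_type_change(t, memtype[mem_type]) )
            rc = -EINVAL;
        else
            rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);

        /* Single put_gfn() now covers the shared, type-change-denied and
         * change-type paths. */
        put_gfn(d, pfn);

        if ( rc )
            break;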

> 
> > +struct xen_dm_op_set_mem_type {
> > +    /* IN - number of contiguous pages */
> > +    uint32_t nr;
> > +    /* IN - first pfn in region */
> > +    uint64_t first_pfn;
> > +    /* IN - new hvmmem_type_t of region */
> > +    uint16_t mem_type;
> > +};
> 
> mem_type should be moved up, first_pfn be aligned, and explicit
> padding be inserted.

OK.
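
I.e. presumably something along these lines (sketch; 'pad' is just a 
placeholder name for the explicit padding):

    struct xen_dm_op_set_mem_type {
        /* IN - number of contiguous pages */
        uint32_t nr;
        /* IN - new hvmmem_type_t of region */
        uint16_t mem_type;
        uint16_t pad;
        /* IN - first pfn in region */
        uint64_t first_pfn;
    };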

  Paul

> 
> Jan
