Re: [Xen-devel] [PATCH v18 01/10] x86: add generic resource (e.g. MSR) access hypercall



On Tue, Sep 30, 2014 at 01:12:42PM +0100, Jan Beulich wrote:
> >>> On 30.09.14 at 12:49, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
> > +static unsigned int check_resource_access(struct xen_resource_access *ra)
> > +{
> > +    xenpf_resource_entry_t *entry;
> > +    int ret = 0;
> > +    unsigned int i;
> > +
> > +    for ( i = 0; i < ra->nr_entries; i++ )
> > +    {
> > +        entry = ra->entries + i;
> > +
> > +        if ( entry->rsvd )
> > +        {
> > +            entry->u.ret = -EINVAL;
> > +            break;
> > +        }
> > +
> > +        switch ( entry->u.cmd )
> > +        {
> > +        case XEN_RESOURCE_OP_MSR_READ:
> > +        case XEN_RESOURCE_OP_MSR_WRITE:
> > +            if ( entry->idx >> 32 )
> > +                ret = -EINVAL;
> > +            else if ( !allow_access_msr(entry->idx) )
> > +                ret = -EACCES;
> > +            break;
> > +        default:
> > +            ret = -EINVAL;
> 
> -EOPNOTSUPP or any other suitable but more specific error code
> than -EINVAL.
Thanks Jan.
> 
> > +            break;
> > +        }
> > +
> > +        if ( ret )
> > +        {
> > +           entry->u.ret = ret;
> > +           break;
> > +        }
> 
> Other than Andrew said, this is okay (thus retaining the ->u.cmd
> value for the success case).
> 
> > +    }
> > +
> > +    /* Return the number of successes. */
> > +    return i;
> > +}
> > +
> > +static void resource_access(void *info)
> > +{
> > +    struct xen_resource_access *ra = info;
> > +    xenpf_resource_entry_t *entry;
> > +    int ret;
> > +    unsigned int i;
> > +
> > +    for ( i = 0; i < ra->nr_entries; i++ )
> > +    {
> > +        entry = ra->entries + i;
> > +
> > +        switch ( entry->u.cmd )
> > +        {
> > +        case XEN_RESOURCE_OP_MSR_READ:
> > +            ret = rdmsr_safe(entry->idx, entry->val);
> > +            break;
> > +        case XEN_RESOURCE_OP_MSR_WRITE:
> > +            ret = wrmsr_safe(entry->idx, entry->val);
> > +            break;
> > +        default:
> > +            ret = -EINVAL;
> > +            break;
> 
> BUG(). You checked invalid cmd-s already above.
OK
> 
> > +        }
> > +
> > +        if ( ret )
> > +        {
> > +           entry->u.ret = ret;
> > +           break;
> > +        }
> 
> Here you indeed should update ->u.ret unconditionally if we really
> want success to be indicated here too. I'm not sure this is needed
> though, since the return value of the hypercall should indicate the
> slot where to look for the op-specific error code.

I didn't update ->u.ret for the successful case because I wanted to keep it
consistent with check_resource_access().

> 
> > @@ -601,6 +689,75 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
> >      }
> >      break;
> >  
> > +    case XENPF_resource_op:
> > +    {
> > +        struct xen_resource_access ra;
> > +        uint32_t cpu;
> > +        XEN_GUEST_HANDLE(xenpf_resource_entry_t) guest_entries;
> > +
> > +        ra.nr_entries = op->u.resource_op.nr_entries;
> > +        if ( ra.nr_entries == 0 || ra.nr_entries > RESOURCE_ACCESS_MAX_ENTRIES )
> > +        {
> > +            ret = -EINVAL;
> 
> I don't think ra.nr_entries == 0 is a reason to fail the hypercall.
Do you mean 'ret = 0' ?
> 
> > +            break;
> > +        }
> > +
> > +        ra.entries = xmalloc_array(xenpf_resource_entry_t, ra.nr_entries);
> > +        if ( !ra.entries )
> > +        {
> > +            ret = -ENOMEM;
> > +            break;
> > +        }
> > +
> > +        guest_from_compat_handle(guest_entries, op->u.resource_op.entries);
> > +
> > +        if ( copy_from_guest(ra.entries, guest_entries, ra.nr_entries) )
> > +        {
> > +            xfree(ra.entries);
> > +            ret = -EFAULT;
> > +            break;
> > +        }
> > +
> > +        /* Do sanity check earlier to omit the potential IPI overhead. */
> > +        if ( check_resource_access(&ra) < ra.nr_entries )
> > +        {
> > +            /* Copy the return value for failed entry. */
> > +            if ( __copy_to_guest_offset(guest_entries, ret,
> > +                                        ra.entries + ret, 1) )
> > +                ret = -EFAULT;
> > +            else
> > +                ret = 0;
> 
> This should be the index of the failed entry. I guess it would be
> easier and more consistent if check_resource_access() too used
> ra.ret for passing back the failed index (which btw should be
> renamed to e.g. "done" - "ret" is no longer a suitable name).

I agree to use ra.ret for check_resource_access(). But I still think returning 0
here is more reasonable: a positive return value indicates the number of
successful operations, and at this point we have only passed the sanity check
and have not performed any access yet. Returning the index of the failed entry
would lead the caller to think the data for the earlier entries is valid.

> 
> > +
> > +            xfree(ra.entries);
> > +            break;
> > +        }
> > +
> > +        cpu = op->u.resource_op.cpu;
> > +        if ( cpu == smp_processor_id() )
> > +            resource_access(&ra);
> > +        else if ( cpu_online(cpu) )
> 
> This continues to be wrong.

Unfortunately I had already sent out v18 before I saw your discussion.
I will use Andrew's suggestion.

> 
> > +            on_selected_cpus(cpumask_of(cpu), resource_access, &ra, 1);
> > +        else
> > +        {
> > +            xfree(ra.entries);
> > +            ret = -ENODEV;
> > +            break;
> > +        }
> > +
> > +        /* Copy all if succeeded or up to the failed entry. */
> > +        if ( __copy_to_guest_offset(guest_entries, 0, ra.entries,
> > +                                    min(ra.nr_entries, ra.ret + 1)) )
> 
> I don't see a need for min() here - ra.ret mustn't be out of range.
> If you're concerned, add an ASSERT().

For the fully-successful case, ra.ret will be ra.nr_entries, so ra.ret + 1
would be out of range.

Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
