
Re: [Xen-devel] [PATCH 06/13] libx86: Introduce a helper to serialise a cpuid_policy object



On Wed, Jul 04, 2018 at 05:46:29PM +0100, Andrew Cooper wrote:
> On 04/07/18 10:01, Jan Beulich wrote:
> >>>> On 03.07.18 at 22:55, <andrew.cooper3@xxxxxxxxxx> wrote:
> >> --- a/xen/common/libx86/cpuid.c
> >> +++ b/xen/common/libx86/cpuid.c
> >> @@ -34,6 +34,100 @@ const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature)
> >>  }
> >>  
> >>  /*
> >> + * Copy a single cpuid_leaf into a provided xen_cpuid_leaf_t buffer,
> >> + * performing boundary checking against the buffer size.
> >> + */
> >> +static int copy_leaf_to_buffer(uint32_t leaf, uint32_t subleaf,
> >> +                               const struct cpuid_leaf *data,
> >> +                               cpuid_leaf_buffer_t leaves,
> >> +                               uint32_t *curr_entry, const uint32_t nr_entries)
> >> +{
> >> +    const xen_cpuid_leaf_t val = {
> >> +        leaf, subleaf, data->a, data->b, data->c, data->d,
> >> +    };
> >> +
> >> +    if ( *curr_entry == nr_entries )
> >> +        return -ENOBUFS;
> >> +
> >> +    if ( copy_to_buffer_offset(leaves, *curr_entry, &val, 1) )
> >> +        return -EFAULT;
> >> +
> >> +    ++*curr_entry;
> > Following on from what Wei has said - so you don't mean to provide a
> > way here for a higher level caller to learn how many slots would have
> > been needed?
> 
> I don't understand your query.  An individual build has a compile-time
> static maximum number of leaves, and this number can be obtained in the
> usual way by making a hypercall with a NULL guest handle.
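
For illustration, the usual two-step shape of that from the toolstack side
looks roughly like the sketch below.  The xc_get_cpuid_policy() name is
purely hypothetical, standing in for "the hypercall", and is not the
interface this series actually adds.

#include <stdlib.h>
#include <errno.h>
#include <xenctrl.h>

/*
 * Sketch only: xc_get_cpuid_policy() is a made-up wrapper name, used to
 * show the NULL-buffer size-query convention.
 */
static int fetch_cpuid_policy(xc_interface *xch, uint32_t domid)
{
    uint32_t nr_leaves = 0;
    xen_cpuid_leaf_t *leaves;
    int rc;

    /* Pass 1: NULL buffer - only learn how many entries are needed. */
    rc = xc_get_cpuid_policy(xch, domid, &nr_leaves, NULL);
    if ( rc )
        return rc;

    leaves = calloc(nr_leaves, sizeof(*leaves));
    if ( !leaves )
        return -ENOMEM;

    /* Pass 2: fetch the leaves; nr_leaves is updated to the number written. */
    rc = xc_get_cpuid_policy(xch, domid, &nr_leaves, leaves);

    free(leaves);
    return rc;
}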

Ah, this is what I was looking for. I think that NULL-handle size query
should be wrapped into a helper function, by the way.
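
Something along these lines would do, reusing the hypothetical
xc_get_cpuid_policy() name from the sketch above purely for illustration,
so callers don't open-code the NULL-handle query:

/*
 * Hypothetical helper (name and interface assumed, not part of this
 * series): hide the NULL-handle size query behind one call.
 */
static int xc_get_nr_cpuid_leaves(xc_interface *xch, uint32_t domid,
                                  uint32_t *nr_leaves)
{
    /* A NULL buffer makes the underlying call report only the required size. */
    return xc_get_cpuid_policy(xch, domid, nr_leaves, NULL);
}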

Wei.
