
Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from all sockets



>>> On 28.01.14 at 15:23, "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@xxxxxxxxx> wrote:
>> > +    case XEN_SYSCTL_getcqminfo:
>> > +    {
>> > +        struct xen_socket_cqmdata *info;
>> > +        uint32_t num_sockets;
>> > +        uint32_t num_rmids;
>> > +        cpumask_t cpu_cqmdata_map;
>> 
>> Unless absolutely avoidable, not CPU masks on the stack please.
> 
> Okay, will allocate it by "xzalloc" function.

Hopefully this was just a thinko - zalloc_cpumask_var() is what you
want to use.
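
Something along these lines (just a sketch; error handling and the
actual use of the mask omitted):

    cpumask_var_t cpu_cqmdata_map;

    /* Heap-allocated, zeroed mask instead of a cpumask_t on the stack. */
    if ( !zalloc_cpumask_var(&cpu_cqmdata_map) )
        return -ENOMEM;

    /* ... build and consume the mask as before ... */

    free_cpumask_var(cpu_cqmdata_map);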

>> > +        num_sockets = min((unsigned int)cpumask_weight(&cpu_cqmdata_map) + 1,
>> > +                          sysctl->u.getcqminfo.num_sockets);
>> > +        num_rmids = get_cqm_count();
>> > +        info = xzalloc_array(struct xen_socket_cqmdata,
>> > +                             num_rmids * num_sockets);
>> 
>> While unlikely right now, you ought to consider the case of this
>> multiplication overflowing.
>> 
>> Also - how does the caller know how big the buffer needs to be?
>> Only num_sockets can be restricted by it...
>> 
>> And what's worse - you allow the caller to limit num_sockets and
>> allocate info based on this limited value, but you don't restrict
>> cpu_cqmdata_map to just the socket covered, i.e. if the caller
>> specified a lower number, then you'll corrupt memory.
> 
> Currently the caller (libxc) sets num_rmid and num_sockets to large values,
> i.e. the maximum values that could be available inside the hypervisor.
> If you think this approach is not enough to ensure security, what about
> having the caller (libxc) issue a hypercall to get the two values from the
> hypervisor first, and then allocate a buffer sized according to num_rmid
> and num_sockets?

Yes, that's the first half of it. And then _both_ values, as they
determine the array dimensions, need to be passed back into the
"actual" call, and the hypervisor will need to take care not to
exceed these array dimensions.
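
As an illustration only (the num_rmids input field is made up here, the
first "query" call isn't shown, and the clamping could of course be
structured differently), the second, "actual" call would then do
something like:

    uint32_t num_rmids = sysctl->u.getcqminfo.num_rmids;   /* hypothetical field */
    uint32_t num_sockets = sysctl->u.getcqminfo.num_sockets;
    uint32_t max_rmids = get_cqm_count();
    uint32_t max_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
    struct xen_socket_cqmdata *info;

    /* Clamp both dimensions to what the hypervisor actually has. */
    if ( num_rmids > max_rmids )
        num_rmids = max_rmids;
    if ( num_sockets > max_sockets )
        num_sockets = max_sockets;

    /* Refuse dimensions whose product would overflow. */
    if ( num_sockets && num_rmids > ~0u / num_sockets )
        return -EINVAL;

    info = xzalloc_array(struct xen_socket_cqmdata, num_rmids * num_sockets);
    if ( !info )
        return -ENOMEM;

    /* ... fill and copy back at most num_rmids * num_sockets entries ... */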

>> And finally, I think the total size of the buffer here can easily
>> exceed a page, i.e. this then ends up being a non-order-0
>> allocation, which may _never_ succeed (i.e. the operation is
>> then rendered useless). I guess it'd be better to e.g. vmap()
>> the MFNs underlying the guest buffer.
> 
> Do you mean we should check the total size, allocate MFNs one by one, and
> then vmap them?

That would also be a possibility, but it isn't what I suggested above -
it is less resource efficient, but easier to implement.
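
For completeness, the page-by-page variant you describe would look
roughly like this (sketch only, assuming the vmap() flavour that takes
an array of MFNs; the variant mapping the guest buffer itself isn't
shown, and cleanup is simplified):

    unsigned int i, nr_pages = PFN_UP((size_t)num_rmids * num_sockets *
                                      sizeof(struct xen_socket_cqmdata));
    unsigned long *mfns;
    void *buf;
    int ret = -ENOMEM;

    mfns = xmalloc_array(unsigned long, nr_pages);
    if ( !mfns )
        return -ENOMEM;

    /*
     * Order-0 allocations only, so a large buffer doesn't depend on
     * finding physically contiguous memory.
     */
    for ( i = 0; i < nr_pages; i++ )
    {
        struct page_info *pg = alloc_domheap_page(NULL, 0);

        if ( !pg )
            goto out;
        mfns[i] = page_to_mfn(pg);
    }

    /* Stitch the pages together into one virtually contiguous buffer. */
    buf = vmap(mfns, nr_pages);
    if ( buf )
    {
        /* ... fill buf and copy_to_guest() it ... */
        vunmap(buf);
        ret = 0;
    }

 out:
    while ( i-- )
        free_domheap_page(mfn_to_page(mfns[i]));
    xfree(mfns);
    return ret;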

Jan

