
Re: [Xen-devel] [PATCH v2 1/8] x86: detect and initialize Cache QoS Monitoring feature



On 25/11/13 08:57, Xu, Dongxiao wrote:
>>
>>>> +boolean_param("pqos", pqos_enabled);
>>>> +
>>>> +unsigned int cqm_res_count = 0;
>>>> +unsigned int cqm_upscaling_factor = 0;
>>>> +bool_t cqm_enabled = 0;
>>>> +struct cqm_res_struct *cqm_res_array = NULL;
>>>> +
>>>> +static void __init init_cqm(void)
>>>> +{
>>>> +    unsigned int eax, edx;
>>>> +    unsigned int max_cqm_rmid;
>>>> +
>>>> +    cpuid_count(0xf, 1, &eax, &cqm_upscaling_factor, &max_cqm_rmid, &edx);
>>>> +    if ( !(edx & QOS_MONITOR_EVTID_L3) )
>>>> +        return;
>>>> +
>>>> +    cqm_res_count = max_cqm_rmid + 1;
>>>> +
>>> Range check on cqm_res_count ?  If max_cqm_rmid ends up as -1 from the
>>> cpuid, we will allocate a 0 length array and crash later when reserving
>>> RMID 0
>> That's a good point.
>> I will add a range check here.
> According to the SDM, the largest possible RMID value is 0xffffffff, so the
> largest possible cqm_res_count is 0x100000000.
>
> So what about defining cqm_res_count as "unsigned long"?
>
> Thanks,
> Dongxiao

In which case there needs to be a command line resource limit.  2^32
cqm_res_struct instances are far too many to allocate by default.
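(For scale: even at a hypothetical 8 bytes per cqm_res_struct, 2^32 of them
would come to 2^35 bytes, i.e. 32GiB of Xen heap.)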

Perhaps a "qos" custom param with "max-rmid=", set to a sensible default
such as 256.
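
Roughly, a minimal sketch of what such a knob could look like, written in the
style of the other option parsers of this era (a void custom_param() handler
plus simple_strtoul()).  The "qos" name and the 256 default come from the
suggestion above; opt_max_rmid and the exact clamping policy are hypothetical,
not taken from an applied patch:

/* Hypothetical command line limit on the number of RMIDs tracked. */
static unsigned int __initdata opt_max_rmid = 256;

/* Parse "qos=[...,]max-rmid=<n>[,...]" from the Xen command line. */
static void __init parse_qos_param(const char *s)
{
    const char *ss;

    do {
        ss = strchr(s, ',');

        if ( !strncmp(s, "max-rmid=", 9) )
            opt_max_rmid = simple_strtoul(s + 9, &s, 0);

        s = ss + 1;
    } while ( ss );
}
custom_param("qos", parse_qos_param);

static void __init init_cqm(void)
{
    unsigned int eax, ebx, ecx, edx;
    unsigned int max_cqm_rmid;

    cpuid_count(0xf, 1, &eax, &ebx, &ecx, &edx);
    if ( !(edx & QOS_MONITOR_EVTID_L3) )
        return;

    cqm_upscaling_factor = ebx;
    max_cqm_rmid = ecx;

    /* Treat ~0 from CPUID as bogus rather than letting the "+ 1" below wrap. */
    if ( max_cqm_rmid == ~0u )
        return;

    /*
     * Clamp to the administrator's limit so that a huge CPUID value cannot
     * make us allocate an enormous cqm_res_array by default.
     */
    if ( max_cqm_rmid > opt_max_rmid )
        max_cqm_rmid = opt_max_rmid;

    cqm_res_count = max_cqm_rmid + 1;

    /* ... allocation of cqm_res_array continues as in the patch ... */
}

Clamping before the "+ 1" also covers the max_cqm_rmid == -1 case raised
earlier, so cqm_res_count can no longer wrap to 0.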

~Andrew



 

