
Re: [Xen-devel] [PATCH V2] x86, amd_ucode: Verify max allowed patch size before apply



>>> On 30.04.14 at 01:56, <aravind.gopalakrishnan@xxxxxxx> wrote:
> On 4/29/2014 4:33 PM, Aravind Gopalakrishnan wrote:
>> On 4/29/2014 3:02 AM, Jan Beulich wrote:
>>
>>>> @@ -123,8 +151,17 @@ static bool_t microcode_fits(const struct microcode_amd *mc_amd, int cpu)
>>>>       if ( (mc_header->processor_rev_id) != equiv_cpu_id )
>>>>           return 0;
>>>>
>>>> +    if ( !verify_patch_size(mc_amd->mpb_size) )
>>>> +    {
>>>> +        printk(XENLOG_DEBUG "microcode: patch size mismatch\n");
>>>> +        return -E2BIG;
>>>> +    }
>>>> +
>>>>       if ( mc_header->patch_id <= uci->cpu_sig.rev )
>>>> -        return 0;
>>>> +    {
>>>> +        printk(XENLOG_DEBUG "microcode: patch is already at required level or greater.\n");
>>>> +        return -EEXIST;
>>>> +    }
>>>>
>>>>       printk(KERN_DEBUG "microcode: CPU%d found a matching microcode "
>>>>              "update with version %#x (current=%#x)\n",
>>> Honestly I'm rather hesitant to accept further generally useless
>>> messages, no matter that they get printed at KERN_DEBUG only. I'd
>>> much rather see these, as well as the existing ones, converted to
>>> pr_debug(), so they can easily be enabled if someone really needs
>>> to do debugging here. That's mainly because I (and I suppose other
>>> developers do too) try to run with loglvl=all wherever possible,
>>> yet already on the 2x4-core box (not to speak of the newer
>>> 2x12-core one) I find these messages rather annoying.
>>
>> Hmm, okay. I'll work on this and send an updated version.
> 
> A couple of ideas about implementing this:

Actually I'd prefer to just go the microcode_intel.c route for now, unless
there's a compelling reason for something more involved.

Jan
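
For context, the "microcode_intel.c route" means routing the debug
messages through a local pr_debug() macro that compiles to nothing by
default and is only switched to printk() by a developer actively
debugging the file. A rough sketch of how the hunk quoted above could
look under that scheme follows; the exact macro form and the converted
lines are illustrative, not taken from a posted patch:

    /*
     * Local debug switch in the style of microcode_intel.c: leave the
     * "#if 0" in place for normal builds so the messages compile away
     * and do not flood a console running with loglvl=all; change it to
     * "#if 1" while debugging to get the messages back via printk().
     */
    #if 0
    #define pr_debug(x...) printk(x)
    #else
    #define pr_debug(x...) ((void)0)
    #endif

    /* ... inside microcode_fits(), the checks from the hunk above ... */
        if ( !verify_patch_size(mc_amd->mpb_size) )
        {
            pr_debug("microcode: patch size mismatch\n");
            return -E2BIG;
        }

        if ( mc_header->patch_id <= uci->cpu_sig.rev )
        {
            pr_debug("microcode: patch is already at required level or greater.\n");
            return -EEXIST;
        }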


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

