
Re: [Xen-devel] [PATCH v7 04/10] microcode: remove struct ucode_cpu_info



On Tue, Jun 04, 2019 at 09:13:46AM -0600, Jan Beulich wrote:
>>>> On 27.05.19 at 10:31, <chao.gao@xxxxxxxxx> wrote:
>> We can remove the per-cpu cache field in struct ucode_cpu_info since
>> it has been replaced by a global cache. This leaves only one field
>> remaining in ucode_cpu_info. The struct is therefore removed and the
>> remaining field (the CPU signature) is stored in the per-cpu area.
>> 
>> Also remove 'microcode_resume_match' from microcode_ops because the
>> check is done in find_patch(). The cpu status notifier is also
>> removed. It was used to free the "mc" field to avoid a memory leak.
>
>There's no find_patch() function anymore afaics. And I also think this
>should be a separate patch. The above isn't enough imo to justify ...
>
>>  int microcode_resume_cpu(unsigned int cpu)
>>  {
>>      int err;
>> -    struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>> -    struct cpu_signature nsig;
>> -    unsigned int cpu2;
>> +    struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);
>>  
>>      if ( !microcode_ops )
>>          return 0;
>>  
>>      spin_lock(&microcode_mutex);
>>  
>> -    err = microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);
>> -    if ( err )
>> -    {
>> -        __microcode_fini_cpu(cpu);
>> -        spin_unlock(&microcode_mutex);
>> -        return err;
>> -    }
>> -
>> -    if ( uci->mc.mc_valid )
>> -    {
>> -        err = microcode_ops->microcode_resume_match(cpu, uci->mc.mc_valid);
>> -        if ( err >= 0 )
>> -        {
>> -            if ( err )
>> -                err = microcode_ops->apply_microcode(cpu);
>> -            spin_unlock(&microcode_mutex);
>> -            return err;
>> -        }
>> -    }
>> -
>> -    nsig = uci->cpu_sig;
>> -    __microcode_fini_cpu(cpu);
>> -    uci->cpu_sig = nsig;
>> -
>> -    err = -EIO;
>> -    for_each_online_cpu ( cpu2 )
>> -    {
>> -        uci = &per_cpu(ucode_cpu_info, cpu2);
>> -        if ( uci->mc.mc_valid &&
>> -             microcode_ops->microcode_resume_match(cpu, uci->mc.mc_valid) > 0 )
>> -        {
>> -            err = microcode_ops->apply_microcode(cpu);
>> -            break;
>> -        }
>> -    }
>
>... in particular the removal of this loop, the more that both the
>loop and the code ahead of it also call ->apply_microcode().

Ok. Will split it out from this patch and refine the patch description.

Basically, this function tries its best to find a suitable patch in the
per-cpu cache and load it. Now that the per-cpu cache has been replaced
by the global cache, and ->apply_microcode() loads from the global cache
rather than the per-cpu one, a simple invocation of ->apply_microcode()
is enough to apply the cached microcode during CPU hotplug or on resume
from hibernation.
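
In other words, the function would reduce to roughly the following (a
sketch of the intended result, not the literal patch; the exact error
handling around ->apply_microcode() is assumed):

    int microcode_resume_cpu(unsigned int cpu)
    {
        int err;
        struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);

        if ( !microcode_ops )
            return 0;

        spin_lock(&microcode_mutex);

        /* Refresh this CPU's signature ... */
        err = microcode_ops->collect_cpu_info(cpu, sig);
        /* ... and load whatever the global cache currently holds. */
        if ( !err )
            err = microcode_ops->apply_microcode(cpu);

        spin_unlock(&microcode_mutex);

        return err;
    }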

>
>> @@ -281,7 +281,6 @@ static enum microcode_match_result compare_patch(
>>   */
>>  static int get_matching_microcode(const void *mc, unsigned int cpu)
>>  {
>> -    struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>
>Note how this was using "cpu".
>
>> @@ -308,17 +307,7 @@ static int get_matching_microcode(const void *mc, unsigned int cpu)
>>  
>>      pr_debug("microcode: CPU%d found a matching microcode update with"
>>               " version %#x (current=%#x)\n",
>> -             cpu, mc_header->rev, uci->cpu_sig.rev);
>> -    new_mc = xmalloc_bytes(total_size);
>> -    if ( new_mc == NULL )
>> -    {
>> -        printk(KERN_ERR "microcode: error! Can not allocate memory\n");
>> -        return -ENOMEM;
>> -    }
>> -
>> -    memcpy(new_mc, mc, total_size);
>> -    xfree(uci->mc.mc_intel);
>> -    uci->mc.mc_intel = new_mc;
>> +             cpu, mc_header->rev, this_cpu(cpu_sig).rev);
>
>Why "this_cpu()" here?

It should be part of the next patch.
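
(For reference, the semantic difference, assuming @cpu may not be the
CPU executing this code:

    per_cpu(cpu_sig, cpu).rev   /* signature of the given @cpu */
    this_cpu(cpu_sig).rev       /* signature of the CPU running this code */

so switching to this_cpu() is only correct once the function is
guaranteed to run on the CPU being updated.)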

Thanks
Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel