Re: [Xen-devel] [PATCH v8 03/16] microcode/intel: extend microcode_update_match()
On 05.08.2019 13:51, Chao Gao wrote:
> On Mon, Aug 05, 2019 at 09:27:58AM +0000, Jan Beulich wrote:
>> On 05.08.2019 07:58, Chao Gao wrote:
>>> On Fri, Aug 02, 2019 at 01:29:14PM +0000, Jan Beulich wrote:
>>>> On 01.08.2019 12:22, Chao Gao wrote:
>>>>> --- a/xen/arch/x86/microcode_intel.c
>>>>> +++ b/xen/arch/x86/microcode_intel.c
>>>>> @@ -134,14 +134,35 @@ static int collect_cpu_info(unsigned int cpu_num, struct cpu_signature *csig)
>>>>> return 0;
>>>>> }
>>>>>
>>>>> -static inline int microcode_update_match(
>>>>> - unsigned int cpu_num, const struct microcode_header_intel *mc_header,
>>>>> - int sig, int pf)
>>>>> +static enum microcode_match_result microcode_update_match(
>>>>> + const struct microcode_header_intel *mc_header, unsigned int sig,
>>>>> + unsigned int pf, unsigned int rev)
>>>>> {
>>>>> - struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu_num);
>>>>> -
>>>>> - return (sigmatch(sig, uci->cpu_sig.sig, pf, uci->cpu_sig.pf) &&
>>>>> - (mc_header->rev > uci->cpu_sig.rev));
>>>>> + const struct extended_sigtable *ext_header;
>>>>> + const struct extended_signature *ext_sig;
>>>>> + unsigned long data_size = get_datasize(mc_header);
>>>>> + unsigned int i;
>>>>> + const void *end = (const void *)mc_header + get_totalsize(mc_header);
>>>>> +
>>>>> + if ( sigmatch(sig, mc_header->sig, pf, mc_header->pf) )
>>>>> + return (mc_header->rev > rev) ? NEW_UCODE : OLD_UCODE;
>>>>
>>>> Both here and ...
>>>>
>>>>> + ext_header = (const void *)(mc_header + 1) + data_size;
>>>>> + ext_sig = (const void *)(ext_header + 1);
>>>>> +
>>>>> + /*
>>>>> + * Make sure there is enough space to hold an extended header and enough
>>>>> + * array elements.
>>>>> + */
>>>>> + if ( (end < (const void *)ext_sig) ||
>>>>> + (end < (const void *)(ext_sig + ext_header->count)) )
>>>>> + return MIS_UCODE;
>>>>> +
>>>>> + for ( i = 0; i < ext_header->count; i++ )
>>>>> + if ( sigmatch(sig, ext_sig[i].sig, pf, ext_sig[i].pf) )
>>>>> + return (mc_header->rev > rev) ? NEW_UCODE : OLD_UCODE;
>>>>
>>>> ... here there's again an assumption that there's strict ordering
>>>> between blobs, but as mentioned in reply to a later patch of an
>>>> earlier version this isn't the case. Therefore the function can't
>>>> be used to compare two arbitrary blobs, it may only be used to
>>>> compare a blob with what is already loaded into a CPU. I think it
>>>> is quite important to mention this restriction in a comment ahead
>>>> of the function.
>>>>
>>>> The code itself looks fine to me, and a comment could perhaps be
>>>> added while committing; with such a comment
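For context, the pointer arithmetic in the hunk above walks the Intel
microcode container, whose layout in microcode_intel.c is roughly as
below (a from-memory sketch, not a verbatim copy of the header):

    struct microcode_header_intel {
        unsigned int hdrver;
        unsigned int rev;          /* microcode revision */
        unsigned int date;
        unsigned int sig;          /* CPUID signature the payload targets */
        unsigned int cksum;
        unsigned int ldrver;
        unsigned int pf;           /* platform flags mask */
        unsigned int datasize;     /* payload size (a default applies if 0) */
        unsigned int totalsize;    /* header + payload + extended table */
        unsigned int reserved[3];
    };

    struct extended_signature {
        unsigned int sig;
        unsigned int pf;
        unsigned int cksum;
    };

    struct extended_sigtable {
        unsigned int count;        /* number of extended_signature entries */
        unsigned int cksum;
        unsigned int reserved[3];
        struct extended_signature sigs[];
    };

The optional extended table starts right after the payload, i.e. at
(mc_header + 1) + datasize, which is exactly what the hunk computes before
bounds-checking it against "end".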
>>>
>>> Agreed. Since there will be a version 9, I can add a comment.
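For illustration, the restriction comment being discussed could read
roughly like this (a sketch only, not necessarily the wording that was
eventually committed):

    /*
     * Given a CPU signature (sig/pf/rev triple), check whether the ucode
     * blob at mc_header is applicable, and if so whether it is newer.
     *
     * Note: this only yields a meaningful result when both sides are
     * applicable to the CPU this code runs on.  Revisions of blobs for
     * unrelated signatures have no defined ordering, so the function must
     * not be used to compare two arbitrary blobs against each other.
     */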
>>
>> Having seen the uses in later patches, I think a comment is not the
>> way to go. Instead I think you want to first match _both_
>> signatures against the local CPU (unless e.g. for either side this
>> is logically guaranteed),
>
> Yes. It is guaranteed in the first place: we ignore any patch that
> doesn't match the CPU signature when parsing the ucode blob.
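A minimal sketch of that parse-time filtering, with a hypothetical helper
name (only the MIS_UCODE check reflects the guarantee being described):

    /*
     * Hypothetical helper: while parsing the blob, anything not applicable
     * to the given CPU signature is dropped, so later comparisons only
     * ever see patches matching the local CPU.
     */
    static bool patch_is_applicable(const struct microcode_header_intel *mc,
                                    const struct cpu_signature *csig)
    {
        return microcode_update_match(mc, csig->sig, csig->pf,
                                      csig->rev) != MIS_UCODE;
    }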
Well, if that's the case, then perhaps a comment is really all
that's needed, i.e. ...
>> and return MIS_UCODE upon mismatch. Only
>> then should you actually compare the two signatures. While from a
>> pure, abstract patch ordering perspective this isn't correct, we
>> only care about patches applicable to the local CPU anyway, and for
>> that case the extra restriction is fine. This way you'll be able to
>> avoid taking extra precautions in vendor-independent code just to
>> accommodate an Intel specific requirement.
>
> Yes. I agree and it seems that no further change is needed except
> the implementation of ->compare_patch. Please correct me if I am wrong.
... maybe not even the change here.
Jan
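To spell out the approach settled on above, an Intel ->compare_patch hook
following this suggestion could look roughly like the sketch below (the
hook's exact parameter types in the series are an assumption here):

    static enum microcode_match_result compare_patch(
        const struct microcode_header_intel *new,
        const struct microcode_header_intel *old)
    {
        const struct cpu_signature *sig = &this_cpu(ucode_cpu_info).cpu_sig;

        /*
         * Both inputs are expected to have been matched against the local
         * CPU already (see the parsing discussion above); anything else
         * would make the revision comparison meaningless.
         */
        ASSERT(microcode_update_match(old, sig->sig, sig->pf, sig->rev) !=
               MIS_UCODE);
        ASSERT(microcode_update_match(new, sig->sig, sig->pf, sig->rev) !=
               MIS_UCODE);

        return new->rev > old->rev ? NEW_UCODE : OLD_UCODE;
    }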