Re: [Xen-devel] [PATCH v2] x86/save: reserve HVM save record numbers that have been consumed...



On 19.12.2019 14:15, Durrant, Paul wrote:
>> From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Sent: 19 December 2019 13:08
>>
>> On 19/12/2019 12:37, Durrant, Paul wrote:
>>>> From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>> Sent: 19 December 2019 12:16
>>>>
>>>> On 19/12/2019 12:04, Paul Durrant wrote:
>>>>> --- a/xen/include/public/arch-x86/hvm/save.h
>>>>> +++ b/xen/include/public/arch-x86/hvm/save.h
>>>>> @@ -639,6 +639,8 @@ struct hvm_msr {
>>>>>
>>>>>  #define CPU_MSR_CODE  20
>>>>>
>>>>> +/* Range 22 - 40 reserved for Amazon */
>>>> What about 21 and 22?  And where does the actual number stop, because
>>>> based on v1, it's not 40.
>>>>
>>> The range is inclusive (maybe that's not obvious?). For some reason 21
>>> was skipped, but why do you say the top is not 40? That was what I set
>>> HVM_SAVE_CODE_MAX to in v1.
>>
>> You also said that included prospective headroom, which definitely isn't
>> fair to reserve for ABI breakage reasons.
>>
>> Which numbers have actually been allocated?
>>
> 
> The problem is that I don't yet know for sure. I have definitely seen
> patches using 22 thru 34. It is *probably* safe to restrict to that, but
> does it really cost that much more to reserve some extra space just in
> case? I.e. if someone else adds 41 vs. 35, is it going to make much of a
> difference?

Not _that much_, but still - it's a bodge, so let's try to limit it as
much as we can.

Jan
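
For reference, a minimal sketch of the narrower reservation being argued
for above, against the same hunk of xen/include/public/arch-x86/hvm/save.h
(these numbers are the type-codes consumed by DECLARE_HVM_SAVE_TYPE()).
The upper bound of 34 is an assumption taken from Paul's "22 thru 34"
observation; the thread had not settled the final bound at this point:

--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -639,6 +639,8 @@ struct hvm_msr {

 #define CPU_MSR_CODE  20

+/* Range 22 - 34 (inclusive) reserved for Amazon */
+

Spelling out "(inclusive)" addresses the ambiguity raised earlier in the
thread, and not touching HVM_SAVE_CODE_MAX (v1 raised it to 40) avoids
baking prospective headroom into the ABI, which was Andrew's objection
above.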
