
Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5



On 10.02.2020 10:20, Julien Grall wrote:
> Hi Jan,
> 
> On 10/02/2020 08:43, Jan Beulich wrote:
>> On 08.02.2020 15:37, Julien Grall wrote:
>>>
>>>
>>> On 05/02/2020 13:27, Jan Beulich wrote:
>>>> On 05.02.2020 14:21, Roger Pau Monné wrote:
>>>>> On Wed, Feb 05, 2020 at 09:46:25AM +0100, Jan Beulich wrote:
>>>>>> On 04.02.2020 18:34, Roger Pau Monne wrote:
>>>>>>> Import the functions and their dependencies. Based on Linux 5.5, commit
>>>>>>> id d5226fa6dbae0569ee43ecfc08bdcd6770fc4755.
>>>>>>>
>>>>>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>>>>>
>>>>>> Thanks for going this route; two remarks / requests:
>>>>>>
>>>>>>> --- a/xen/common/bitmap.c
>>>>>>> +++ b/xen/common/bitmap.c
>>>>>>> @@ -212,6 +212,47 @@ int __bitmap_weight(const unsigned long *bitmap, int bits)
>>>>>>>    #endif
>>>>>>>    EXPORT_SYMBOL(__bitmap_weight);
>>>>>>>    
>>>>>>> +void __bitmap_set(unsigned long *map, unsigned int start, int len)
>>>>>>> +{
>>>>>>> +       unsigned long *p = map + BIT_WORD(start);
>>>>>>> +       const unsigned int size = start + len;
>>>>>>> +       int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
>>>>>>> +       unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
>>>>>>> +
>>>>>>> +       while (len - bits_to_set >= 0) {
>>>>>>> +               *p |= mask_to_set;
>>>>>>> +               len -= bits_to_set;
>>>>>>> +               bits_to_set = BITS_PER_LONG;
>>>>>>> +               mask_to_set = ~0UL;
>>>>>>> +               p++;
>>>>>>> +       }
>>>>>>> +       if (len) {
>>>>>>> +               mask_to_set &= BITMAP_LAST_WORD_MASK(size);
>>>>>>> +               *p |= mask_to_set;
>>>>>>> +       }
>>>>>>> +}
>>>>>>> +EXPORT_SYMBOL(__bitmap_set);
>>>>>>> +
>>>>>>> +void __bitmap_clear(unsigned long *map, unsigned int start, int len)
>>>>>>> +{
>>>>>>> +       unsigned long *p = map + BIT_WORD(start);
>>>>>>> +       const unsigned int size = start + len;
>>>>>>> +       int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
>>>>>>> +       unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
>>>>>>> +
>>>>>>> +       while (len - bits_to_clear >= 0) {
>>>>>>> +               *p &= ~mask_to_clear;
>>>>>>> +               len -= bits_to_clear;
>>>>>>> +               bits_to_clear = BITS_PER_LONG;
>>>>>>> +               mask_to_clear = ~0UL;
>>>>>>> +               p++;
>>>>>>> +       }
>>>>>>> +       if (len) {
>>>>>>> +               mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
>>>>>>> +               *p &= ~mask_to_clear;
>>>>>>> +       }
>>>>>>> +}
>>>>>>> +EXPORT_SYMBOL(__bitmap_clear);
>>>>>>
>>>>>> Despite all the other EXPORT_SYMBOL() in this file, personally I
>>>>>> would suggest to refrain from adding more. But I'm not going to
>>>>>> insist (until such time that they all get cleaned up).
>>>>>>
>>>>>>> --- a/xen/include/asm-x86/bitops.h
>>>>>>> +++ b/xen/include/asm-x86/bitops.h
>>>>>>> @@ -480,4 +480,6 @@ static inline int fls(unsigned int x)
>>>>>>>    #define hweight16(x) generic_hweight16(x)
>>>>>>>    #define hweight8(x) generic_hweight8(x)
>>>>>>>    
>>>>>>> +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
>>>>>>
>>>>>> At first I thought - why for x86 only? Then I noticed Arm has an
>>>>>> almost identical #define already. Which in turn made me look at
>>>>>> Linux, where that #define lives in a common header. I think you
>>>>>> want to move the Arm one. Or wait, no - Arm's isn't even
>>>>>> compatible with the implementations of the functions you add.
>>>>>> This definitely needs taking care of, perhaps by way of ignoring
>>>>>> my request to go this route (as getting too involved).
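(For context, the mismatch being discussed is roughly the following; the
Arm definition is paraphrased from memory, so the exact spelling of the
32-bit constant may differ:)

    /* Linux, and hence the helpers imported above: a "word" is an
     * unsigned long, i.e. 64 bits on arm64 / x86-64. */
    #define BIT_WORD(nr) ((nr) / BITS_PER_LONG)

    /* Xen's Arm variant (since commit cd338e967c59, quoted further
     * down): a "word" is 32 bits, so the same name indexes 32-bit
     * words instead. */
    #define BIT_WORD(nr) ((nr) / 32)

    /* Consequence for the imported code on arm64: "map + BIT_WORD(64)"
     * yields map + 1 with the Linux definition, but map + 2 with the
     * Arm one, i.e. the wrong unsigned long would be modified. */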
>>>>>
>>>>> Urg, yes, I didn't realize that BIT_WORD on ARM is only meant to be
>>>>> used when the bitmap is mapped to an array of 32-bit elements.
>>>>>
>>>>> I could introduce BIT_LONG that would have the same definition on Arm
>>>>> and x86, and then modify the imported functions to use it, but IMO the
>>>>> right solution would be to change the Arm BIT_WORD macro to also use
>>>>> BITS_PER_LONG (and adjust the callers).
>>>>
>>>> So do I. Julien, Stefano?
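(A minimal sketch of the BIT_LONG option mentioned above - the name and
its use are only what is proposed in this thread, not taken from any
actual patch:)

    /* Identical on Arm and x86, so the imported Linux helpers can use
     * it without touching Arm's existing 32-bit based BIT_WORD(). */
    #define BIT_LONG(nr) ((nr) / BITS_PER_LONG)

    /* The imported functions would then open with e.g.
     *     unsigned long *p = map + BIT_LONG(start);
     * instead of map + BIT_WORD(start). */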
>>>
>>> BIT_WORD used to use BITS_PER_LONG but this was changed in commit:
>>>
>>> commit cd338e967c598bf747b03dcfd9d8d45dc40bac1a
>>> Author: Ian Campbell <ian.campbell@xxxxxxxxxx>
>>> Date:   Thu May 8 16:13:55 2014 +0100
>>>
>>>       xen: arm: bitops take unsigned int
>>>
>>>       Xen bitmaps can be 4 rather than 8 byte aligned, so use the appropriate type.
>>>       Otherwise the compiler can generate unaligned 8 byte accesses and cause traps.
>>>
>>>       Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
>>>       Acked-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>>>
>>> On 64-bit Arm, while we allow unaligned access, the atomic operations
>>> still enforce alignment.
>>>
>>> On 32-bit Arm, unaligned accesses are not allowed at all. However, the
>>> change of BIT_WORD is not a concern for 32-bit.
>>>
>>> I haven't checked whether we still have places where bitops are used on
>>> memory that is only 4-byte aligned. However, as the bitops take a void *
>>> parameter, there is no promise on the alignment.
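(To illustrate the alignment point for readers who haven't hit it: with a
64-bit "word", a hypothetical but perfectly legal caller of the void *
interface, like the layout below, could fault on arm64, because the
exclusive/atomic accesses must be naturally aligned even though plain
loads and stores need not be:)

    /* uint32_t as provided by xen/types.h; the struct name and layout
     * are made up purely for illustration. */
    struct flags_holder {
        uint32_t id;
        uint32_t bits[2];   /* only 4-byte aligned, 64 bits of flags */
    };

    /* With BIT_WORD() based on BITS_PER_LONG, set_bit(0, holder->bits)
     * on arm64 would perform an atomic access of an unsigned long at an
     * address that may only be 4-byte aligned, and the exclusives
     * (LDXR/STXR) trap on that. Hence the 2014 switch to 32-bit words. */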
>>
>> I'm pretty sure for x86 the 32-bit guest compat code uses such, at
>> the very least.
> 
> I have spent some time looking at it and noticed there are some in the
> common code (e.g. scheduler, IRQ, ...).
> 
>>
>>> Therefore, we can't rewrite BIT_WORD without addressing the underlying
>>> issues. Introducing BIT_LONG is probably the easiest way at the moment.
>>
>> Which would make us (continue to) deviate from Linux's meaning of
>> BIT_WORD().
> 
> This would not really be the first time we deviate from Linux...

Of course. And there've been quite a few cases where I've argued
towards deviation. It's just that iirc you're one of those who
prefer less deviation, so I've been a little puzzled.

>>> However, our bitops really ought to encode the expected alignment in
>>> their parameter type to avoid such issues arising.
>>>
>>> I would be in favor of using unsigned long *.
>>
>> I don't think they should, as this complicates uses on non-64-bit
>> quantities. In fact I think bitops would better be permitted also
>> on sub-32-bit values. But anyway - x86 under the hood uses 32-bit
>> memory accesses too, in a number of cases. It's not obvious to me
>> why Arm64 couldn't do so as well, despite BIT_WORD() - for the
>> purposes of generic code - assuming "unsigned long" to be the base
>> "word".
> 
> My point is we should avoid using void * and implicitly requiring an
> alignment (32-bit at the moment). This has resulted in numerous issues
> in the past on Arm.

See how/why we have bitop_bad_size() on x86.
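(For readers not familiar with it: the x86 check works roughly as below.
This is paraphrased from xen/include/asm-x86/bitops.h rather than quoted,
so details may differ slightly:)

    /* Never defined anywhere, so a surviving call fails at link time. */
    extern void __bitop_bad_size(void);
    #define bitop_bad_size(addr) (sizeof(*(addr)) < 4)

    /* The public bitop macros wrap the underlying helpers along these
     * lines, so passing a pointer whose target type is narrower than
     * 32 bits breaks the build instead of silently misbehaving: */
    #define set_bit(nr, addr) ({                        \
        if ( bitop_bad_size(addr) ) __bitop_bad_size(); \
        set_bit(nr, addr);                              \
    })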

> To be clear, I am not requesting to handle the void *.
> 
> Anyway, blindly updating BIT_WORD() is going to break Arm. So you either
> rename the current macro or create a new one.

That's understood. The question was rather what direction to go to
resolve the issue.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

