
Re: [Xen-devel] [XEN PATCH v2 1/2] Check zone before merging adjacent blocks in heap



On Wed, 2020-02-05 at 10:22 +0000, Julien Grall wrote:
> However, I don't like the idea of relying on count_info == 0. Indeed,
> there are valid cases where count_info == 0 because it means the page
> is in use (PGC_state_inuse).
> 
> It might be best if we introduce a new page state that would be the 
> default value set in the frametable.
> 
> 0 may be an option if we assign a different value to PGC_state_inuse,
> but we would need to check whether there are places relying on
> PGC_state_inuse == 0. I know that assign_pages() does rely on it; I
> sent a patch for it yesterday.
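
(For concreteness, the kind of reliance meant there is an assignment of
roughly this shape in assign_pages() (paraphrased from memory, not an
exact quote), which only leaves the page in the inuse state because
PGC_state_inuse happens to be the all-zeroes value:

    /* Paraphrase of the relevant assign_pages() step, not the exact
     * code: writing count_info without ORing in any PGC_state_* value
     * only works while PGC_state_inuse is the all-zeroes state. */
    page_set_owner(&pg[i], d);
    smp_wmb();
    pg[i].count_info = PGC_allocated | 1;

so giving PGC_state_inuse a non-zero value means that assignment, and
anything like it, has to start setting the state explicitly.)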

We'd need an extra bit for that.

Perhaps we could recognise that PGC_broken is not valid in combination
with some of the four PGC_state states, and take that bit back to turn
PGC_state into a 3-bit field, which could hold 8 states, of which I
think we currently need only 7?

 { zero, inuse, offlining, offlining_broken, offline, offline_broken, free }
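
Something like this, purely as a sketch (the shift value and the name
of the extra "zero"/untouched state are illustrative, not taken from
the real headers):

    /* Sketch only: the bit position is illustrative, not the real
     * layout; only the idea of a 3-bit state field matters here. */
    #define PGC_state_shift             6
    #define PGC_state_mask              (7UL << PGC_state_shift)
    #define PGC_state_val(n)            ((unsigned long)(n) << PGC_state_shift)

    #define PGC_state_zero              PGC_state_val(0) /* untouched default */
    #define PGC_state_inuse             PGC_state_val(1)
    #define PGC_state_offlining         PGC_state_val(2)
    #define PGC_state_offlining_broken  PGC_state_val(3)
    #define PGC_state_offline           PGC_state_val(4)
    #define PGC_state_offline_broken    PGC_state_val(5)
    #define PGC_state_free              PGC_state_val(6)

    #define page_state_is(pg, st) \
        (((pg)->count_info & PGC_state_mask) == PGC_state_##st)

That also makes the all-zeroes frametable default a state in its own
right, with the eighth encoding left spare.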

> Another option is to use an all 1s value and initialize the frametable 
> to all 1s. But I am not entirely sure whether the allocator would be 
> able to cope with it.

We'd zero it in init_heap_pages() for every page, so it would still be
set to all 1s only for pages that came from the boot allocator. As long
as all 1s *is* proven to be an invalid state (I think it is), the same
kind of trick as in my earlier straw man, checking
unlikely(pg->count_info == -1) in the free functions, would probably
work.
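
In the free functions that check would look roughly like this (a sketch
only; whether init_heap_pages() can safely be reached from there is
exactly the part that would need proving):

    /* Sketch: an all-1s count_info would mean this frametable entry
     * was never touched by init_heap_pages(), i.e. the page came
     * straight from the boot allocator and still holds the boot-time
     * fill pattern. */
    if ( unlikely(pg->count_info == (unsigned long)-1) )
    {
        /* Lazily initialise (and thereby free) it rather than letting
         * the normal free path see an uninitialised entry. */
        init_heap_pages(pg, 1UL << order);
        return;
    }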

I think I prefer zero being the "untouched" state though, rather than
all 1s. 



 

